<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: national image interpretability rating scale</title> <meta name="description" content="Search results for: national image interpretability rating scale"> <meta name="keywords" content="national image interpretability rating scale"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" 
href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="national image interpretability rating scale" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" 
title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="national image interpretability rating scale"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 12926</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: national image interpretability rating scale</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12896</span> Speeding-up Gray-Scale FIC by Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Al-Hilo">Eman A. Al-Hilo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hawraa%20H.%20Al-Waelly"> Hawraa H. Al-Waelly</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, fractal compression (FIC) technique is introduced based on using moment features to block indexing the zero-mean range-domain blocks. The moment features have been used to speed up the IFS-matching stage. 
A moment-ratio descriptor is used to filter the domain blocks, keeping only those suitable for IFS matching with the tested range block. Tests conducted on the Lena and Cat images (256 pixels, 24 bits/pixel) showed minimum encoding times (0.89 sec for the Lena image and 0.78 sec for the Cat image) with acceptable PSNR (30.01 dB for Lena and 29.8 dB for Cat). The reduction in encoding time (ET) is about 12% for the Lena image and 67% for the Cat image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20gray%20level%20image" title="fractal gray level image">fractal gray level image</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20compression%20technique" title=" fractal compression technique"> fractal compression technique</a>, <a href="https://publications.waset.org/abstracts/search?q=iterated%20function%20system" title=" iterated function system"> iterated function system</a>, <a href="https://publications.waset.org/abstracts/search?q=moments%20feature" title=" moments feature"> moments feature</a>, <a href="https://publications.waset.org/abstracts/search?q=zero-mean%20range-domain%20block" title=" zero-mean range-domain block"> zero-mean range-domain block</a> </p> <a href="https://publications.waset.org/abstracts/19903/speeding-up-gray-scale-fic-by-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12895</span> Detect Circles in Image: Using Statistical Image Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Fathi%20M.%20O.%20Hamed">Fathi M. O. Hamed</a>, <a href="https://publications.waset.org/abstracts/search?q=Salma%20F.%20Elkofhaifee"> Salma F. Elkofhaifee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this work is to detect geometric objects in an image; here, the object is assumed to be circular. Identification requires finding three characteristics of the object: number, size, and location. To achieve this goal, the paper presents an algorithm that combines statistical approaches with image-analysis techniques. The algorithm was evaluated on simulated data, where it yields good results, and was then applied to real data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=median%20filter" title=" median filter"> median filter</a>, <a href="https://publications.waset.org/abstracts/search?q=projection" title=" projection"> projection</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-space" title=" scale-space"> scale-space</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold" title=" threshold"> threshold</a> </p> <a href="https://publications.waset.org/abstracts/37141/detect-circles-in-image-using-statistical-image-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37141.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge
badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12894</span> An Exploratory Study of Reliability of Ranking vs. Rating in Peer Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang%20Song">Yang Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Yifan%20Guo"> Yifan Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Edward%20F.%20Gehringer"> Edward F. Gehringer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fifty years of research has found great potential for peer assessment as a pedagogical approach. With peer assessment, not only do students receive more copious assessments; they also learn to become assessors. In recent decades, more educational peer assessments have been facilitated by online systems. Those online systems are designed differently to suit different class settings and student groups, but they basically fall into two categories: rating-based and ranking-based. The rating-based systems ask assessors to rate the artifacts one by one following some review rubrics. The ranking-based systems allow assessors to review a set of artifacts and give a rank for each of them. Though there are different systems and a large number of users of each category, there is no comprehensive comparison on which design leads to higher reliability. In this paper, we designed algorithms to evaluate assessors' reliabilities based on their rating/ranking against the global ranks of the artifacts they have reviewed. These algorithms are suitable for data from both rating-based and ranking-based peer assessment systems. The experiments were done based on more than 15,000 peer assessments from multiple peer assessment systems. 
We found that the assessors in ranking-based peer assessments are at least 10% more reliable than the assessors in rating-based peer assessments. Further analysis also demonstrated that the assessors in ranking-based assessments tend to assess the more differentiable artifacts correctly, but there is no such pattern for rating-based assessors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=peer%20assessment" title="peer assessment">peer assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=peer%20rating" title=" peer rating"> peer rating</a>, <a href="https://publications.waset.org/abstracts/search?q=peer%20ranking" title=" peer ranking"> peer ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability" title=" reliability"> reliability</a> </p> <a href="https://publications.waset.org/abstracts/66206/an-exploratory-study-of-reliability-of-ranking-vs-rating-in-peer-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66206.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">439</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12893</span> Cluster Analysis of Students’ Learning Satisfaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Purevdolgor%20Luvsantseren">Purevdolgor Luvsantseren</a>, <a href="https://publications.waset.org/abstracts/search?q=Ajnai%20Luvsan-Ish"> Ajnai Luvsan-Ish</a>, <a href="https://publications.waset.org/abstracts/search?q=Oyuntsetseg%20Sandag"> Oyuntsetseg Sandag</a>, <a href="https://publications.waset.org/abstracts/search?q=Javzmaa%20Tsend"> Javzmaa Tsend</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Akhit%20Tileubai"> Akhit Tileubai</a>, <a href="https://publications.waset.org/abstracts/search?q=Baasandorj%20Chilhaasuren"> Baasandorj Chilhaasuren</a>, <a href="https://publications.waset.org/abstracts/search?q=Jargalbat%20Puntsagdash"> Jargalbat Puntsagdash</a>, <a href="https://publications.waset.org/abstracts/search?q=Galbadrakh%20Chuluunbaatar"> Galbadrakh Chuluunbaatar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the satisfaction level of first-year premedical students in the Medical Physics course using the cluster method. Materials and Methods: A questionnaire was collected from a total of 324 students who took the Medical Physics course in the first year of the premedical program at the Mongolian National University of Medical Sciences. Satisfaction was measured on five levels: "excellent", "good", "medium", "bad" and "very bad". A total of 39 questionnaire items were collected from students: 8 for course evaluation, 19 for teacher evaluation, and 12 for student evaluation. From the research, a database with 39 fields and 324 records was created. Results: Cluster analysis was performed on this database in MATLAB and R using the k-means method of data mining. The Hopkins statistic was calculated for the database; the values are 0.88, 0.87, and 0.97, which shows that cluster analysis methods can be used. The course evaluation sub-database is divided into three clusters: cluster I has 150 objects (46.2%) with a "good" rating, cluster II has 119 objects (36.7%) with a "medium" rating, and cluster III has 54 objects (16.6%) with a "good" rating.
The teacher evaluation sub-database is divided into three clusters: cluster II has 179 objects (55.2%) with a "good" rating, cluster III has 108 objects (33.3%) with an "average" rating, and cluster I has 36 objects (11.1%) with an "excellent" rating. The student evaluation sub-database is divided into two clusters: cluster II has 215 objects (66.3%) with an "excellent" rating, and cluster I has 108 objects (33.3%) with an "excellent" rating. Evaluating the resulting clusters with the silhouette coefficient gives 0.32 for the course evaluation clusters, 0.31 for the teacher evaluation clusters, and 0.30 for the student evaluation clusters, showing statistical significance. Conclusion: In the course evaluation model of the medical physics lesson, the clusters are "good" (46.2%), "middle" (36.7%), and "bad" (16.6%); in the teacher evaluation model, "good" (55.2%), "middle" (33.3%), and "bad" (11.1%); and in the student evaluation model, "good" (66.3%) and "bad" (33.3%). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=questionnaire" title="questionnaire">questionnaire</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means%20method" title=" k-means method"> k-means method</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20coefficient" title=" silhouette coefficient"> silhouette coefficient</a> </p> <a href="https://publications.waset.org/abstracts/185266/cluster-analysis-of-students-learning-satisfaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">50</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span
class="badge badge-info">12892</span> Efficacy of Erector Spinae Plane Block for Postoperative Pain Management in Coronary Artery Bypass Graft Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Santosh%20Sharma%20Parajuli">Santosh Sharma Parajuli</a>, <a href="https://publications.waset.org/abstracts/search?q=Diwas%20Manandhar"> Diwas Manandhar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Perioperative pain management plays an integral part in patients undergoing cardiac surgery. We studied the effect of Erector Spinae Plane block on acute postoperative pain reduction and 24 hours opioid consumption in adult cardiac surgical patients. Methods: Twenty-five adult cardiac surgical patients who underwent cardiac surgery with sternotomy in whom ESP catheters were placed preoperatively were kept in group E, and the other 25 patients who had undergone cardiac surgery without ESP catheter and pain management done with conventional opioid injection were placed in group C. Fentanyl was used for pain management. The primary study endpoint was to compare the consumption of fentanyl and to assess the numeric rating scale in the postoperative period in the first 24 hours in both groups. Results: The 24 hours fentanyl consumption was 43.00±51.29 micrograms in the Erector Spinae Plane catheter group and 147.00±60.94 micrograms in the control group postoperatively which was statistically significant (p <0.001). The numeric rating scale was also significantly reduced in the Erector Spinae Plane group compared to the control group in the first 24 hours postoperatively. Conclusion: Erector Spinae Plane block is superior to the conventional opioid injection method for postoperative pain management in CABG patients. Erector Spinae Plane block not only decreases the overall opioid consumption but also the NRS score in these patients. 
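The headline comparison in the abstract above (43.00 ± 51.29 vs. 147.00 ± 60.94 micrograms of fentanyl over 24 hours, n = 25 per group) can be checked from the summary statistics alone with Welch's unequal-variance t-test. The sketch below is purely illustrative — the abstract does not state which test the authors used — but it yields a t statistic far beyond the 5% critical value, consistent with the reported p &lt; 0.001:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's unequal-variance t statistic and degrees of freedom,
    computed from summary statistics (mean, SD, n) of two groups."""
    se1, se2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(se1 + se2)
    df = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# Summary statistics reported in the abstract (micrograms fentanyl / 24 h)
t, df = welch_t(43.00, 51.29, 25, 147.00, 60.94, 25)
print(f"t = {t:.2f}, df = {df:.1f}")  # |t| far exceeds the ~2.01 critical value at alpha = 0.05
```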
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=erector" title="erector">erector</a>, <a href="https://publications.waset.org/abstracts/search?q=spinae" title=" spinae"> spinae</a>, <a href="https://publications.waset.org/abstracts/search?q=plane" title=" plane"> plane</a>, <a href="https://publications.waset.org/abstracts/search?q=numerical%20rating%20scale" title=" numerical rating scale"> numerical rating scale</a> </p> <a href="https://publications.waset.org/abstracts/167320/efficacy-of-erector-spinae-plane-block-for-postoperative-pain-management-in-coronary-artery-bypass-graft-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167320.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">67</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12891</span> Geographical Data Visualization Using Video Games Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nizar%20Karim%20Uribe-Orihuela">Nizar Karim Uribe-Orihuela</a>, <a href="https://publications.waset.org/abstracts/search?q=Fernando%20Brambila-Paz"> Fernando Brambila-Paz</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivette%20Caldelas"> Ivette Caldelas</a>, <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20Montufar-Chaveznava"> Rodrigo Montufar-Chaveznava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present the advances corresponding to the implementation of a strategy to visualize geographical data using a Software Development Kit (SDK) for video games. 
We use multispectral images from the Landsat 7 platform and Laser Imaging Detection and Ranging (LIDAR) data from the National Institute of Statistics and Geography of Mexico (INEGI). We select a place of interest from the Landsat imagery and apply some processing to the image (rotation, atmospheric correction and enhancement). The resulting image serves as our gray-scale color map to fuse with the LIDAR data, which was selected using the same coordinates as the Landsat scene. The LIDAR data is translated to 8-bit raw data. Both images are fused in software developed with Unity (an SDK employed for video games). The resulting scene is then displayed and can be explored by moving around. The idea is that the software could be used by geology and geophysics students at the Engineering School of the National University of Mexico: they download the software and the images corresponding to a geological place of interest to a smartphone and can virtually visit and explore the site with a virtual-reality visor such as Google Cardboard.
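The fusion step described above — a gray-scale Landsat band draped over LIDAR elevations rescaled to 8-bit raw data — can be sketched outside Unity. The snippet below is a hypothetical NumPy illustration; the array shapes and value ranges are invented stand-ins, not the authors' data, and in Unity the height channel would displace terrain vertices while the gray channel textures them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real inputs: a gray-scale Landsat band and
# a LIDAR elevation grid covering the same coordinates (same shape).
landsat_gray = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
lidar_elev = rng.uniform(2200.0, 2400.0, size=(64, 64))  # metres

# Translate the elevations to 8-bit raw data, as described in the abstract.
lo, hi = lidar_elev.min(), lidar_elev.max()
heightmap = np.round(255 * (lidar_elev - lo) / (hi - lo)).astype(np.uint8)

# "Fuse": each cell carries a displacement (height) and an albedo (gray level).
fused = np.dstack([heightmap, landsat_gray])
print(fused.shape)  # (64, 64, 2)
```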
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title="virtual reality">virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20technologies" title=" interactive technologies"> interactive technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=geographical%20data%20visualization" title=" geographical data visualization"> geographical data visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games%20technologies" title=" video games technologies"> video games technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20material" title=" educational material"> educational material</a> </p> <a href="https://publications.waset.org/abstracts/79894/geographical-data-visualization-using-video-games-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79894.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12890</span> Corporate Governance and Share Prices: Firm Level Review in Turkey</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raif%20Parlakkaya">Raif Parlakkaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmet%20Diken"> Ahmet Diken</a>, <a href="https://publications.waset.org/abstracts/search?q=Erkan%20Kara"> Erkan Kara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the relationship between corporate governance rating and stock prices of 26 Turkish firms listed in Turkish stock exchange (Borsa Istanbul) by using 
panel data analysis over a five-year period. The paper also investigates the stock performance of firms with a governance rating relative to the market portfolio (i.e., the BIST 100 Index) both before and after governance scoring began. The empirical results show no relation between corporate governance rating and stock prices when using panel data for annual variation in both rating score and stock prices. Further analysis indicates the surprising result that while the selected firms outperform the market significantly prior to rating, the same performance does not continue afterwards. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corporate%20governance" title="corporate governance">corporate governance</a>, <a href="https://publications.waset.org/abstracts/search?q=stock%20price" title=" stock price"> stock price</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a>, <a href="https://publications.waset.org/abstracts/search?q=panel%20data%20analysis" title=" panel data analysis "> panel data analysis </a> </p> <a href="https://publications.waset.org/abstracts/29587/corporate-governance-and-share-prices-firm-level-review-in-turkey" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29587.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">393</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12889</span> National Branding through Education: South Korean Image in Romania through the Language Textbooks for Foreigners</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raluca-Ioana%20Antonescu">Raluca-Ioana
Antonescu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper examines Korean public diplomacy and national branding strategies, and how Korean language textbooks have been used to construct the Korean national image. The research field stands at the intersection of Linguistics and Political Science, and the research problem is the role of language and culture in the national branding process. The goal is to contribute to the literature situated at the intersection of International Relations and Applied Linguistics, and the objective is to conceptualize national branding by emphasizing a dimension that is rarely discussed: education as an instrument of national branding and public diplomacy strategies. To examine the importance of language in national branding strategies, the paper answers one main question, How is the Korean language used in the construction of national branding?, and two secondary questions, How does the literature explore the relations between language and the construction of national branding? and What image of South Korea do the language textbooks for foreigners transmit? To answer these questions, the paper starts from one main hypothesis, that language is an essential component of culture and is used in the construction of national branding, influenced by traditional elements (like Confucianism) but also by modern elements (like Western influence), and from two secondary hypotheses: first, that the connections between language and national branding are little explored in the International Relations literature; and second, that the South Korean image is constructed through the promotion of a traditional society, but also a modern one.
In terms of methodology, the paper analyzes the textbooks used at Romanian universities that offer Korean language classes in the three-year B.A. program, following the dialogues, the descriptive texts, and the additional texts about Korean culture. The analysis focuses on rank-status differences, the individual in relation to the collectivity, respect for harmony, and the image of the foreigner. The results show that the South Korean image projected in the textbooks conveys Confucian values and does not emphasize the changes the society has undergone through modernity and globalization. The Westernized aspect of Korean society is conveyed mostly in an informative way, through references to Korean international companies and Korean internal development (such as transport and other services), but the textbooks do not show the cultural changes the society underwent. Although the paper uses the textbooks employed in Romania as teaching material, its findings could be applied at least to other European countries, since these textbooks are issued by South Korean language schools and are also used in other European countries.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=confucianism" title="confucianism">confucianism</a>, <a href="https://publications.waset.org/abstracts/search?q=modernism" title=" modernism"> modernism</a>, <a href="https://publications.waset.org/abstracts/search?q=national%20branding" title=" national branding"> national branding</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20diplomacy" title=" public diplomacy"> public diplomacy</a>, <a href="https://publications.waset.org/abstracts/search?q=traditionalism" title=" traditionalism"> traditionalism</a> </p> <a href="https://publications.waset.org/abstracts/72363/national-branding-through-education-south-korean-image-in-romania-through-the-language-textbooks-for-foreigners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72363.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">242</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12888</span> Sustainable Urban Waterfronts Using Sustainability Assessment Rating System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20M.%20R.%20Hussein">R. M. R. Hussein</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sustainable urban waterfront development is one of the most interesting phenomena of urban renewal in the last decades. However, there are still many cities whose visual image is compromised due to the lack of a sustainable urban waterfront development, which consequently affects the place of those cities globally. 
This paper aims to reimagine the role of waterfront areas in city design, with a particular focus on Egypt, so that they provide attractive, sustainable urban environments while promoting the continued aesthetic development of the city overall. This aim will be achieved by determining the main principles of a sustainable urban waterfront and its applications. This paper concentrates on sustainability assessment rating systems. A number of international case studies, in which cities have applied the basic principles of a sustainable urban waterfront and made use of sustainability assessment rating systems, have been selected as examples that can be applied to urban waterfronts in Egypt. This paper establishes the importance of developing the design of urban environments in Egypt, as well as identifying methods for applying sustainability to urban waterfronts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sustainable%20urban%20waterfront" title="sustainable urban waterfront">sustainable urban waterfront</a>, <a href="https://publications.waset.org/abstracts/search?q=green%20infrastructure" title=" green infrastructure"> green infrastructure</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20efficient" title=" energy efficient"> energy efficient</a>, <a href="https://publications.waset.org/abstracts/search?q=Cairo" title=" Cairo"> Cairo</a> </p> <a href="https://publications.waset.org/abstracts/7715/sustainable-urban-waterfronts-using-sustainability-assessment-rating-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7715.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">471</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12887</span> Influential Factors of Employees’ Work Motivation: Case Study of Siam Thai Co., Ltd</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pitsanu%20Poonpetpun">Pitsanu Poonpetpun</a>, <a href="https://publications.waset.org/abstracts/search?q=Witthaya%20Mekhum"> Witthaya Mekhum</a>, <a href="https://publications.waset.org/abstracts/search?q=Warangkana%20Kongsil"> Warangkana Kongsil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research was an attempt to study the work motivation of employees at Siam Thai Co., Ltd. The study took place in Rayong with 59 employees as participants. The research tool was a questionnaire consisting of sets of questions about the company’s policy, management, executives, and relationships within the firm. The questionnaire used a rating scale with 5 score bands and was analyzed by percentage, frequency, mean, and standard deviation. The results showed that policy and management were rated at a moderate level, executives and managers at a moderate level, and relationships within the firm at a high level. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motivation" title="motivation">motivation</a>, <a href="https://publications.waset.org/abstracts/search?q=job" title=" job"> job</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a>, <a href="https://publications.waset.org/abstracts/search?q=employees" title=" employees"> employees</a> </p> <a href="https://publications.waset.org/abstracts/11984/influential-factors-of-employees-work-motivation-case-study-of-siam-thai-co-ltd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11984.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12886</span> Adaptive Dehazing Using Fusion Strategy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Ramesh%20Kanthan">M. Ramesh Kanthan</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Naga%20Nandini%20Sujatha"> S. Naga Nandini Sujatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of haze removal algorithms is to enhance and recover scene details from a foggy image. For enhancement, the proposed method focuses on two main categories: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) an image edge-strengthened gradient model. In many circumstances, accurate haze removal algorithms are needed. 
The de-fog feature works through a complex algorithm that first determines the fog density of the scene and then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. To increase accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20image" title="single image">single image</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=dehazing" title=" dehazing"> dehazing</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20fusion" title=" multi-scale fusion"> multi-scale fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=per-pixel" title=" per-pixel"> per-pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=weight%20map" title=" weight map"> weight map</a> </p> <a href="https://publications.waset.org/abstracts/32544/adaptive-dehazing-using-fusion-strategy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32544.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">465</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12885</span> Exploring the Spatial Relationship between Built Environment and Ride-hailing Demand: Applying Street-Level Images</h5> 
<div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jingjue%20Bao">Jingjue Bao</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Li"> Ye Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yujie%20Qi"> Yujie Qi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The explosive growth of ride-hailing has reshaped residents' travel behavior and plays a crucial role in urban mobility within the built environment. Contributing to research on the spatial variation of ride-hailing demand and its relationship to the built environment and socioeconomic factors, this study utilizes multi-source data from Haikou, China, to construct a Multi-scale Geographically Weighted Regression (MGWR) model that accounts for spatial scale heterogeneity. The regression results showed that the MGWR model demonstrated superior interpretability and reliability, with a 3.4% improvement in R2 and a reduction in AIC from 4853 to 4787, compared with the Geographically Weighted Regression (GWR) model. Furthermore, to precisely identify the surrounding environment of each sampling point, the DeepLabv3+ model is employed to segment street-level images. Features extracted from these images are incorporated as variables in the regression model, further enhancing its rationality and accuracy, with a 7.78% improvement in R2 over the MGWR model that considered only region-level variables. By integrating multi-scale geospatial data and utilizing advanced computer vision techniques, this study provides a comprehensive understanding of the spatial dynamics between ride-hailing demand and the urban built environment. The insights gained from this research are expected to contribute significantly to urban transportation planning and policy making, as well as to ride-hailing platforms, facilitating the development of more efficient and effective mobility solutions in modern cities. 
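The MGWR/GWR comparison above rests on fitting a separate, distance-weighted regression at every location. As a simplified, hypothetical sketch (plain GWR with a single Gaussian bandwidth in NumPy; not the authors' code, which additionally selects a bandwidth per covariate):

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit a weighted least-squares model at each location (basic GWR).

    Observations near the regression point receive higher weight via a
    Gaussian kernel. MGWR extends this by selecting a distinct bandwidth
    per covariate; that refinement is omitted in this sketch.
    """
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)        # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted normal equations
    return betas

# Toy example: demand depends on one covariate whose effect drifts across space.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
x = rng.normal(size=50)
slope = 1.0 + 0.2 * coords[:, 0]            # true slope varies west-to-east
y = slope * x + rng.normal(scale=0.1, size=50)
X = np.column_stack([np.ones(50), x])
betas = gwr_coefficients(coords, X, y, bandwidth=2.0)
print(betas.shape)  # one local intercept and slope per location
```

The recovered local slopes should track the spatial drift built into the toy data, which is the heterogeneity MGWR is designed to expose.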
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=travel%20behavior" title="travel behavior">travel behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=ride-hailing" title=" ride-hailing"> ride-hailing</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20relationship" title=" spatial relationship"> spatial relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=built%20environment" title=" built environment"> built environment</a>, <a href="https://publications.waset.org/abstracts/search?q=street-level%20image" title=" street-level image"> street-level image</a> </p> <a href="https://publications.waset.org/abstracts/174593/exploring-the-spatial-relationship-between-built-environment-and-ride-hailing-demand-applying-street-level-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174593.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12884</span> Image Segmentation Techniques: Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindani%20Mbatha">Lindani Mbatha</a>, <a href="https://publications.waset.org/abstracts/search?q=Suvendi%20Rimer"> Suvendi Rimer</a>, <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Gololo"> Mpho Gololo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image segmentation is the process of dividing an image into several sections, such as the foreground objects and the background. It is a critical technique in both image-processing tasks and computer vision. 
Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has been devoted to algorithms for color images. Most image segmentation algorithms or techniques vary based on the input data and the application, and nearly all of them are unsuitable for noisy environments. Much of the existing work uses the Markov Random Field (MRF), which is computationally involved but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as easier processing of an image, interpretation of the contents of an image, and easier analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding techniques, among others. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and potential future developments. This review concludes that no single technique is perfectly suitable for segmenting all types of images, but the use of hybrid techniques yields more accurate and efficient results. 
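As a concrete instance of the thresholding family surveyed above, a minimal NumPy sketch of one classic method (Otsu's threshold, chosen here as an illustration rather than drawn from the review itself) might look like:

```python
import numpy as np

def otsu_threshold(image):
    """Return the threshold maximizing between-class variance (Otsu's method).

    A minimal illustration of threshold-based segmentation for 8-bit
    gray-scale images: pixels above the threshold become foreground.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic bimodal image: dark background (~50) with a bright object (~200).
rng = np.random.default_rng(1)
img = rng.normal(50, 10, size=(64, 64))
img[16:48, 16:48] = rng.normal(200, 10, size=(32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t   # binary foreground/background segmentation
print(t)
```

On a clean bimodal histogram like this the threshold lands between the two modes; hybrid pipelines, as the review notes, combine such a cheap global step with region- or edge-based refinement.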
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering-based" title="clustering-based">clustering-based</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution-network" title=" convolution-network"> convolution-network</a>, <a href="https://publications.waset.org/abstracts/search?q=edge-based" title=" edge-based"> edge-based</a>, <a href="https://publications.waset.org/abstracts/search?q=region-growing" title=" region-growing"> region-growing</a> </p> <a href="https://publications.waset.org/abstracts/166513/image-segmentation-techniques-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">97</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12883</span> The Effects of Music Therapy on Positive Negative Syndrome Scale, Cognitive Function, and Quality of Life in Female Schizophrenic Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elmeida%20Effendy">Elmeida Effendy</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20M.%20Amin"> Mustafa M. Amin</a>, <a href="https://publications.waset.org/abstracts/search?q=Nauli%20Aulia%20Lubis"> Nauli Aulia Lubis</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20J.%20Sirait"> P. J. Sirait</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Music therapy may have an effect on mental illnesses. 
This is a comparative, quasi-experimental study examining the effect of music therapy added to standard care on the Positive and Negative Syndrome Scale, cognitive function, and quality of life in female schizophrenic patients. Fifty schizophrenic participants, diagnosed with the semi-structured MINI for ICD-X, were assigned to two groups, both of which received pharmacotherapy. Participants were assigned to each therapy group by using a matched allocation method. Music therapy was added for the first group: they received music therapy, using a Mozart sonata, four times a week over a period of six weeks. Positive and negative symptoms were measured using the Positive and Negative Syndrome Scale (PANSS). Cognitive function was measured using the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MOCA). All rating scales were administered by certified, skilled residents every week after the music therapy session. The participants who received both pharmacotherapy and music therapy showed a significantly greater response than those who received pharmacotherapy only. The mean differences in response were -6.6164 (p=0.001) for PANSS, 2.911 (p=0.004) for MMSE, 3.618 (p=0.001) for MOCA, and 4.599 (p=0.001) for SF-36. Music therapy has beneficial effects on the PANSS, cognitive function, and quality of life in schizophrenic patients. 
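The group comparison reported above (mean differences in change scores with p-values) follows a standard two-sample design. A hedged sketch with synthetic numbers, not the study's data, might look like:

```python
import numpy as np

# Hypothetical illustration of the two-group design described above
# (pharmacotherapy plus music therapy vs. pharmacotherapy alone),
# compared on synthetic change scores in a rating scale.
rng = np.random.default_rng(42)
music = rng.normal(loc=-6.6, scale=3.0, size=25)     # assumed change scores, group 1
control = rng.normal(loc=-1.5, scale=3.0, size=25)   # assumed change scores, group 2

mean_diff = music.mean() - control.mean()            # negative = larger improvement
se = np.sqrt(music.var(ddof=1) / music.size + control.var(ddof=1) / control.size)
t_stat = mean_diff / se                              # Welch-style t statistic
print(round(mean_diff, 2), abs(t_stat) > 2.01)       # 2.01 ≈ two-sided 5% critical value, df ≈ 48
```

The group sizes, means, and spreads here are invented for illustration; the study's actual statistics are those quoted in the abstract.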
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=music%20therapy" title="music therapy">music therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=rating%20scale" title=" rating scale"> rating scale</a>, <a href="https://publications.waset.org/abstracts/search?q=schizophrenia" title=" schizophrenia"> schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=symptoms" title=" symptoms"> symptoms</a> </p> <a href="https://publications.waset.org/abstracts/58534/the-effects-of-music-therapy-on-positive-negative-syndrome-scale-cognitive-function-and-quality-of-life-in-female-schizophrenic-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58534.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12882</span> Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chang-Hsing%20Lee">Chang-Hsing Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng-Chang%20Lien"> Cheng-Chang Lien</a>, <a href="https://publications.waset.org/abstracts/search?q=Chin-Chuan%20Han"> Chin-Chuan Han</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an edge-strength guided multiscale retinex (EGMSR) approach is proposed for color image contrast enhancement. 
In EGMSR, the pixel-dependent weight associated with each pixel in the single-scale retinex output image is computed according to the edge strength around that pixel, in order to prevent over-enhancing the noise contained in smooth dark/bright regions. Further, by fusing together the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we can obtain a natural fused image with high contrast and proper tonal rendition. Experimental results on several low-contrast images have shown that our proposed approach can produce natural and appealing enhanced images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title="image enhancement">image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=multiscale%20retinex" title=" multiscale retinex"> multiscale retinex</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title=" image fusion"> image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=EGMSR" title=" EGMSR"> EGMSR</a> </p> <a href="https://publications.waset.org/abstracts/15139/color-image-enhancement-using-multiscale-retinex-and-image-fusion-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15139.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">458</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12881</span> Examination of 12-14 Years Old Volleyball Players’ Body Image Levels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dilek%20Yal%C4%B1z%20Solmaz">Dilek Yalız Solmaz</a>, <a href="https://publications.waset.org/abstracts/search?q=G%C3%BCls%C3%BCn%20G%C3%BCven"> Gülsün Güven</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this study is to examine the body image levels of 12-14 year old girls who play volleyball. The research group consisted of 113 girls who played volleyball in Sakarya during the fall season of 2015-2016. Data were collected by means of the 'Body Image Questionnaire', originally developed by Secord and Jourard. A repeated reliability analysis of the scale yielded a coefficient of .96. This study employed statistical calculations such as the mean, standard deviation, and t-tests. According to the results of this study, the mean score of the volleyball players is 158.5 ± 25.1 (minimum=40; maximum=200), which suggests that the volleyball players’ body image levels are high. There is a significant difference between the underweight (167.4 ± 20.7) and normal weight (151.4 ± 26.2) groups according to their Body Mass Index: body image levels of the underweight group were higher than those of the normal weight group. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=volleyball" title="volleyball">volleyball</a>, <a href="https://publications.waset.org/abstracts/search?q=players" title=" players"> players</a>, <a href="https://publications.waset.org/abstracts/search?q=body%20image" title=" body image"> body image</a>, <a href="https://publications.waset.org/abstracts/search?q=body%20image%20levels" title=" body image levels"> body image levels</a> </p> <a href="https://publications.waset.org/abstracts/79358/examination-of-12-14-years-old-volleyball-players-body-image-levels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79358.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">210</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12880</span> The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaballah%20Jamil">Jaballah Jamil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not incorporate private information. Moreover, KLD's failure to accurately predict Enron's extra-financial rating casts doubt on the reliability of KLD ratings. 
Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally-weighted portfolio returns, excess stock returns, and book-to-market ratios to different dimensions of KLD social responsibility ratings. We first find that over the last two decades, KLD rating changes significantly and negatively influence the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in a corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal, this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=KLD%20social%20rating%20agency" title="KLD social rating agency">KLD social rating agency</a>, <a href="https://publications.waset.org/abstracts/search?q=investors%27%20perception" title=" investors' perception"> investors' perception</a>, <a href="https://publications.waset.org/abstracts/search?q=investment%20decision" title=" investment decision"> investment decision</a>, <a href="https://publications.waset.org/abstracts/search?q=financial%20performance" title=" financial performance"> financial performance</a> </p> <a href="https://publications.waset.org/abstracts/25867/the-impact-of-the-enron-scandal-on-the-reputation-of-corporate-social-responsibility-rating-agencies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">439</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12879</span> Impact of Teacher Qualifications on the Pedagogical Competencies of University Lecturers in Northwest Nigeria: A Pilot Study Report</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Collins%20Ekpiwre%20Augustine">Collins Ekpiwre Augustine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Taking into account the impact of teacher training on primary and secondary teachers’ classroom competencies and practices, as revealed by many empirical studies, this study investigated the impact of teacher qualifications on the pedagogical competencies of university teachers in Northwest Nigeria. Four research questions were answered, while four hypotheses were tested. 
Both descriptive statistics (frequencies/arithmetic mean) and the inferential statistic of the t-test were used to analyze the data collected. In order to provide a focus to the study, an observational rating scale titled “University Teachers’ Pedagogical Competency Observation Rating Scale” (UTPCORS) was used to collect data for the study. The population for the study comprised all the university teachers in the three Federal Universities in Northwest Nigeria, totaling about 3,401. However, this pilot study was administered to 8 teachers, with 4 participants in each comparison group, at Bayero University, Kano. The findings of the study revealed no significant difference for any of the four hypotheses postulated. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=impact" title="impact">impact</a>, <a href="https://publications.waset.org/abstracts/search?q=university%20teachers" title=" university teachers"> university teachers</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers%27%20qualifications" title=" teachers' qualifications"> teachers' qualifications</a>, <a href="https://publications.waset.org/abstracts/search?q=competencies" title=" competencies"> competencies</a> </p> <a href="https://publications.waset.org/abstracts/22029/impact-of-teacher-qualifications-on-the-pedagogical-competencies-of-university-lecturers-in-northwest-nigeria-a-pilot-study-report" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22029.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">512</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12878</span> Image Distortion Correction Method of 2-MHz Side Scan Sonar for Underwater Structure Inspection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Youngseok%20Kim">Youngseok Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chul%20Park"> Chul Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonghwa%20Yi"> Jonghwa Yi</a>, <a href="https://publications.waset.org/abstracts/search?q=Sangsik%20Choi"> Sangsik Choi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 2-MHz Side Scan SONAR (SSS) attached to a boat for the inspection of underwater structures is affected by shaking, which makes it difficult to determine the exact scale of damage to a structure. In this study, a motion sensor was attached to the inside of the 2-MHz SSS to obtain roll, pitch, and yaw direction data, and an image stabilization tool was developed to correct the sonar image. Experiments confirmed that reliable data can be obtained, with an average error rate of 1.99% between the measured value and the actual distance. This makes it possible to obtain accurate sonar data for inspecting damage in underwater structures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20stabilization" title="image stabilization">image stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20sensor" title=" motion sensor"> motion sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=safety%20inspection" title=" safety inspection"> safety inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=sonar%20image" title=" sonar image"> sonar image</a>, <a href="https://publications.waset.org/abstracts/search?q=underwater%20structure" title=" underwater structure"> underwater structure</a> </p> <a href="https://publications.waset.org/abstracts/84612/image-distortion-correction-method-of-2-mhz-side-scan-sonar-for-underwater-structure-inspection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84612.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12877</span> Formulation of a Rapid Earthquake Risk Ranking Criteria for National Bridges in the National Capital Region Affected by the West Valley Fault Using GIS Data Integration</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=George%20Mariano%20Soriano">George Mariano Soriano</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, a Rapid Earthquake Risk Ranking Criteria was formulated by integrating various existing maps and databases from the Department of Public Works and Highways (DPWH) and the Philippine Institute of Volcanology and Seismology (PHIVOLCS). 
Utilizing Geographic Information System (GIS) software, seismic hazard parameters and bridge vulnerability characteristics were extracted from the above-mentioned maps and databases in order to rank the seismic damage risk of bridges in the National Capital Region. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bridge" title="bridge">bridge</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquake" title=" earthquake"> earthquake</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=hazard" title=" hazard"> hazard</a>, <a href="https://publications.waset.org/abstracts/search?q=risk" title=" risk"> risk</a>, <a href="https://publications.waset.org/abstracts/search?q=vulnerability" title=" vulnerability"> vulnerability</a> </p> <a href="https://publications.waset.org/abstracts/60021/formulation-of-a-rapid-earthquake-risk-ranking-criteria-for-national-bridges-in-the-national-capital-region-affected-by-the-west-valley-fault-using-gis-data-integration" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60021.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">409</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12876</span> Influential Factors Impacting the Utilization of Pain Assessment Tools among Hospitalized Elderly Patients in Taiwan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huei%20Jiun%20Chen">Huei Jiun Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Hui%20Mei%20Huan"> Hui Mei Huan</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Introduction: Pain is an unpleasant experience for hospitalized patients that impacts both their physical and mental well-being. It is important to select appropriate pain assessment tools to ensure effective pain management; for elderly patients, the Verbal Rating Scale (VRS) has been suggested as a better-suited assessment. The Wong-Baker FACES Pain Rating Scale (WBS) is a widely used pain assessment tool in Taiwan that helps individuals communicate the intensity of their pain. However, in clinical practice, even when various assessment tools are available, the Numeric Rating Scale-11 (NRS-11) is still commonly utilized to quantify pain intensity. The correlation between the NRS and other pain assessment tools has not been extensively explored in Taiwan, and the influence of gender and education level on pain assessment among elderly individuals has likewise received little study there. The aim of this study is to investigate the correlation between pain assessment scales (NRS-11, VRS, WBS) in assessing pain intensity among elderly inpatients. The secondary objective is to examine how gender and education level influence pain assessment, as well as to explore preferences regarding pain assessment tools. Method: In this study, a questionnaire survey and purposive sampling were employed to recruit participants from a medical center located in central Taiwan. Participants were requested to assess their pain intensity over the past 24 hours using the NRS-11, VRS, and WBS. The study also investigated their preferences for pain assessment tools. Result: A total of 252 participants were included in this study, with a mean age of 71.1 years (SD=6.2). Of these participants, 135 were male (53.6%), and 44.4% had a primary-level education or below. 
Participants were asked to use the NRS-11, VRS, and WBS to rate their current, maximum, and minimum pain intensity experienced in the past 24 hours. The findings indicated a significant correlation (p< .01) among all three pain assessment tools. No significant differences were observed between genders on any of the three pain assessment scales. For severe pain, there were significant differences in self-rated pain among elderly participants with different education levels (F=3.08, p< .01; X²=17.25, X²=17.21, p< .01), but no significant differences were observed for mild pain. Regarding preferences for pain assessment tools, 158 participants (62.7%) favored the VRS, followed by the WBS; gender and education level had no influence on these preferences. Conclusion: Most elderly participants prefer using the VRS (Verbal Rating Scale) to self-report their pain. This preference may be attributed to the verbal nature of the VRS, which is simple and easy to understand, and it may also be associated with the participants' level of education. Pain assessment using the VRS demonstrated a significant correlation with the NRS-11 and WBS, and gender was not found to influence these assessments. Further research is needed to explore the effect of different education levels on self-reported pain intensity among elderly people in Taiwan. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pain%20assessment" title="pain assessment">pain assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=elderly" title=" elderly"> elderly</a>, <a href="https://publications.waset.org/abstracts/search?q=gender" title=" gender"> gender</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a> </p> <a href="https://publications.waset.org/abstracts/170640/influential-factors-impacting-the-utilization-of-pain-assessment-tools-among-hospitalized-elderly-patients-in-taiwan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170640.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12875</span> The Importance of Visual Communication in Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manjitsingh%20Rajput">Manjitsingh Rajput</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and highlights applications such as computer vision, object recognition, image classification, and autonomous systems. 
Going deeper, it considers the deep learning techniques and neural networks that drive visual understanding, and discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability. The integration of visual communication with other modalities, such as natural language processing and speech recognition, is also explored. Overall, this abstract highlights the critical role visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them, and it outlines how visual elements can be integrated into AI systems to make them more effective, user-friendly, and accessible. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed; handled well, visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20communication%20AI" title="visual communication AI">visual communication AI</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20aid%20in%20communication" title=" visual aid in communication"> visual aid in communication</a>, <a href="https://publications.waset.org/abstracts/search?q=essence%20of%20visual%20communication." title=" essence of visual communication."> essence of visual communication.</a> </p> <a href="https://publications.waset.org/abstracts/174998/the-importance-of-visual-communication-in-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12874</span> Multi-Sensor Image Fusion for Visible and Infrared Thermal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amit%20Kumar%20Happy">Amit Kumar Happy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper is motivated by the importance of multi-sensor image fusion with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. 
These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based upon the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps produce accurate fused results, they are hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity. 
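The MST-based pipeline summarized above can be sketched compactly. The following is an illustrative sketch only, not the paper's MATLAB implementation: a one-level average/detail decomposition stands in for a full multi-scale transform, and a max-absolute coefficient rule stands in for the region-based selection rule with consistency verification.

```python
# Toy multi-scale fusion sketch: decompose each registered signal into a
# coarse approximation and detail coefficients, average the coarse parts,
# and keep the larger-magnitude detail coefficient from either sensor.

def decompose(img):
    """One-level split of a 1-D signal into approximation and detail."""
    approx = [(img[i] + img[i + 1]) / 2 for i in range(0, len(img), 2)]
    detail = [(img[i] - img[i + 1]) / 2 for i in range(0, len(img), 2)]
    return approx, detail

def reconstruct(approx, detail):
    """Invert decompose(): each (a, d) pair yields the sample pair (a+d, a-d)."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(visible, infrared):
    """Fuse two registered signals of equal, even length."""
    a1, d1 = decompose(visible)
    a2, d2 = decompose(infrared)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]                  # average coarse parts
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]  # max-abs detail rule
    return reconstruct(a, d)

vi = [10, 10, 50, 50, 10, 10]   # toy "visible" row
ir = [10, 30, 10, 10, 90, 10]   # toy "infrared" row
print(fuse(vi, ir))
```

Averaging the coarse parts preserves the overall scene, while the max-abs rule keeps the strongest detail (edge) response from either sensor at each location.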
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20fusion" title="image fusion">image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=IR%20thermal%20imager" title=" IR thermal imager"> IR thermal imager</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-sensor" title=" multi-sensor"> multi-sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20transform" title=" multi-scale transform"> multi-scale transform</a> </p> <a href="https://publications.waset.org/abstracts/138086/multi-sensor-image-fusion-for-visible-and-infrared-thermal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12873</span> Use of Interpretable Evolved Search Query Classifiers for Sinhala Documents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prasanna%20Haddela">Prasanna Haddela</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Document analysis is a well matured yet still active research field, partly as a result of the intricate nature of building computational tools but also due to the inherent problems arising from the variety and complexity of human languages. Breaking down language barriers is vital in enabling access to a number of recent technologies. This paper investigates the application of document classification methods to new Sinhalese datasets. This language is geographically isolated and rich with many of its own unique features. 
We will examine the interpretability of the classification models with a particular focus on the use of evolved Lucene search queries generated using a Genetic Algorithm (GA) as a method of document classification. We will compare the accuracy and interpretability of these search queries with other popular classifiers. The results are promising and are roughly in line with previous work on English language datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=evolved%20search%20queries" title="evolved search queries">evolved search queries</a>, <a href="https://publications.waset.org/abstracts/search?q=Sinhala%20document%20classification" title=" Sinhala document classification"> Sinhala document classification</a>, <a href="https://publications.waset.org/abstracts/search?q=Lucene%20Sinhala%20analyzer" title=" Lucene Sinhala analyzer"> Lucene Sinhala analyzer</a>, <a href="https://publications.waset.org/abstracts/search?q=interpretable%20text%20classification" title=" interpretable text classification"> interpretable text classification</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a> </p> <a href="https://publications.waset.org/abstracts/126324/use-of-interpretable-evolved-search-query-classifiers-for-sinhala-documents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12872</span> Analyzing the Shearing-Layer Concept Applied to Urban Green System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=S.%20Pushkar">S. Pushkar</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20Verbitsky"> O. Verbitsky</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, green rating systems are mainly utilized for correctly sizing mechanical and electrical systems, which have short lifetime expectancies. In these systems, passive solar and bio-climatic architecture, which have long lifetime expectancies, are neglected. Urban rating systems consider buildings and services in addition to neighborhoods and public transportation as integral parts of the built environment. The main goal of this study was to develop a more consistent point allocation system for urban building standards by using six different lifetime shearing layers: Site, Structure, Skin, Services, Space, and Stuff, each reflecting distinct environmental damages. This shearing-layer concept was applied to internationally well-known rating systems: Leadership in Energy and Environmental Design (LEED) for Neighborhood Development, BRE Environmental Assessment Method (BREEAM) for Communities, and Comprehensive Assessment System for Building Environmental Efficiency (CASBEE) for Urban Development. The results showed that LEED for Neighborhood Development and BREEAM for Communities focused on long-lifetime-expectancy building designs, whereas CASBEE for Urban Development gave equal importance to the Building and Service Layers. Moreover, although this rating system was applied using a building-scale assessment, “Urban Area + Buildings” focuses on a short-lifetime-expectancy system design, neglecting to improve the architectural design by considering bio-climatic and passive solar aspects. 
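The six-layer point re-allocation described above can be illustrated with a short sketch. All credit names and point values below are hypothetical, not taken from LEED, BREEAM, or CASBEE; the sketch only shows the mechanics of tagging each credit with one of the six shearing layers and tallying where a rating system concentrates its points.

```python
# Hypothetical shearing-layer tally: each credit is tagged with one of the
# six layers (Site, Structure, Skin, Services, Space, Stuff) and points are
# summed per layer, exposing whether a system rewards long- or
# short-lifetime-expectancy design.

LAYERS = ["Site", "Structure", "Skin", "Services", "Space", "Stuff"]

credits = [  # (credit name, layer, points) -- invented for illustration
    ("compact development", "Site", 6),
    ("structural durability", "Structure", 4),
    ("envelope performance", "Skin", 5),
    ("HVAC efficiency", "Services", 10),
    ("flexible floor plans", "Space", 3),
    ("low-impact furnishings", "Stuff", 2),
]

def points_per_layer(credits):
    totals = {layer: 0 for layer in LAYERS}
    for _name, layer, points in credits:
        totals[layer] += points
    return totals

totals = points_per_layer(credits)
grand_total = sum(totals.values())
for layer in LAYERS:
    share = 100 * totals[layer] / grand_total
    print(f"{layer:10s} {totals[layer]:3d} pts ({share:.0f}%)")
```

In this invented example the Services layer, with the shortest lifetime expectancy, dominates the total, which is exactly the imbalance the shearing-layer analysis is meant to reveal.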
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=green%20rating%20system" title="green rating system">green rating system</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20community" title=" urban community"> urban community</a>, <a href="https://publications.waset.org/abstracts/search?q=sustainable%20design" title=" sustainable design"> sustainable design</a>, <a href="https://publications.waset.org/abstracts/search?q=standardization" title=" standardization"> standardization</a>, <a href="https://publications.waset.org/abstracts/search?q=shearing-layer%20concept" title=" shearing-layer concept"> shearing-layer concept</a>, <a href="https://publications.waset.org/abstracts/search?q=passive%20solar%20architecture" title=" passive solar architecture"> passive solar architecture</a> </p> <a href="https://publications.waset.org/abstracts/20551/analyzing-the-shearing-layer-concept-applied-to-urban-green-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20551.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">579</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12871</span> A Method of the Semantic on Image Auto-Annotation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Huo">Lin Huo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianwei%20Liu"> Xianwei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingxiong%20Zhou"> Jingxiong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, due to the existence of semantic gap between image visual features and human concepts, the semantic 
of image auto-annotation has become an important topic. Image auto-annotation by search is a popular approach: first, low-level visual features are extracted from the image and, with a corresponding Hash method, each feature is mapped into a Hash code, eventually transformed into a group of binary strings and stored. We use this approach to design and implement a method of image semantic auto-annotation. Finally, tests based on the Corel image set show that this method is effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20auto-annotation" title="image auto-annotation">image auto-annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20correlograms" title=" color correlograms"> color correlograms</a>, <a href="https://publications.waset.org/abstracts/search?q=Hash%20code" title=" Hash code"> Hash code</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval" title=" image retrieval"> image retrieval</a> </p> <a href="https://publications.waset.org/abstracts/15628/a-method-of-the-semantic-on-image-auto-annotation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15628.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">497</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12870</span> Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yayun%20Hsu">Yayun Hsu</a>, <a href="https://publications.waset.org/abstracts/search?q=Henry%20Horng-Shing%20Lu"> Henry Horng-Shing Lu</a> 
</p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new method is proposed to extend the method of connected component labeling from processing binary images to the multi-scale modeling of images. By using an adaptive threshold over multi-scale attributes, this approach minimizes the possibility of missing important components with weak intensities. In addition, the computational cost of this approach remains similar to that of the typical component labeling approach. This methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation. These applications demonstrate the feasibility of the proposed approach in the analysis of challenging microscopy images for scientific discovery. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=microscopic%20image%20processing" title="microscopic image processing">microscopic image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=scientific%20data%20mining" title=" scientific data mining"> scientific data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-scale%20modeling" title=" multi-scale modeling"> multi-scale modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a> </p> <a href="https://publications.waset.org/abstracts/2589/multiscale-connected-component-labelling-and-applications-to-scientific-microscopy-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2589.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12869</span> An Image Enhancement Method Based on Curvelet 
Transform for CBCT-Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shahriar%20Farzam">Shahriar Farzam</a>, <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Rastgarpour"> Maryam Rastgarpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image denoising plays an extremely important role in digital image processing, and Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a method of image contrast enhancement for cone beam CT (CBCT) images based on fast discrete curvelet transforms (FDCT) that work through the Unequally Spaced Fast Fourier Transform (USFFT). These transforms return a table of Curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location; accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies this two-dimensional transform, the FDCT via the unequally spaced fast Fourier transform, to the input image and then thresholds the Curvelet coefficients to enhance the CBCT images. Applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME). 
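The enhancement step described above, modifying FDCT-USFFT coefficients, reduces in essence to thresholding a coefficient table indexed by (scale, orientation, location). A minimal sketch follows, with invented coefficient values and a plain hard threshold standing in for the paper's full curvelet pipeline:

```python
# Illustration only: hard-threshold a curvelet-like coefficient table before
# the inverse transform. Small-magnitude coefficients mostly carry noise and
# are zeroed; strong coefficients (coarse structure, edges) are kept intact.
# A real implementation would obtain and invert these via FDCT-USFFT.

def hard_threshold(coeffs, t):
    """Zero every coefficient whose magnitude falls below threshold t."""
    return {k: (v if abs(v) >= t else 0.0) for k, v in coeffs.items()}

# keys: (scale, orientation, location) -- values invented for illustration
coeffs = {
    (0, 0, 0): 12.0,   # strong coarse-scale coefficient: kept
    (1, 2, 5): 0.3,    # weak fine-scale coefficient: likely noise, dropped
    (1, 3, 7): -4.1,   # strong edge response: kept, sign preserved
}
print(hard_threshold(coeffs, t=1.0))
```

Soft thresholding (shrinking surviving coefficients toward zero by t) is a common variant; which rule the paper uses is not stated in the abstract.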
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvelet%20transform" title="curvelet transform">curvelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=CBCT" title=" CBCT"> CBCT</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20enhancement" title=" image enhancement"> image enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20denoising" title=" image denoising"> image denoising</a> </p> <a href="https://publications.waset.org/abstracts/69244/an-image-enhancement-method-based-on-curvelet-transform-for-cbct-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12868</span> Effects of Cognitive Reframe on Depression among Secondary School Adolescents: The Moderating Role of Self-Esteem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olayinka%20M.%20Ayannuga">Olayinka M. Ayannuga</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explored the effect of cognitive reframe in reducing depression among Senior Secondary School Adolescents. It adopted a pre-test, post-test, control quasi-experimental research design with a 2x2 factorial matrix. Participants included 120 depressed adolescents randomly drawn from public Senior Secondary School Two (SSS.II) students in Lagos State, Nigeria. Sixty participants were randomly selected and assigned to the treatment and control groups. 
Participants in the Cognitive Reframe (CR) group were trained for 8 weeks, while those in the Control group were given a placebo. Two instruments were used for data collection, namely the Self-Esteem Scale (SES; Rosenberg, 1965; α = 0.85) and the Self-Rating Depression Scale (SDS; Zung, 1972; α = 0.87), both administered at pretest. However, only the Self-Rating Depression Scale (SDS) was re-administered at post-test to measure the effect of the intervention. The results revealed a significant effect of the cognitive reframe training programme on secondary school adolescents’ depression; there were also significant effects of self-esteem on secondary school adolescents’ depression. The study showed that the technique is capable of reducing depression among adolescents. It was recommended, amongst others, that counselling psychologists, curriculum planners, and teachers could explore incorporating the contents of cognitive reframe into the secondary school curriculum for students’ capacity building to reduce depression tendencies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adolescents" title="adolescents">adolescents</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20reframe" title=" cognitive reframe"> cognitive reframe</a>, <a href="https://publications.waset.org/abstracts/search?q=depression" title=" depression"> depression</a>, <a href="https://publications.waset.org/abstracts/search?q=self%20%E2%80%93%20esteem" title=" self – esteem"> self – esteem</a> </p> <a href="https://publications.waset.org/abstracts/53972/effects-of-cognitive-reframe-on-depression-among-secondary-school-adolescents-the-moderating-role-of-self-esteem" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53972.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12867</span> Correlation of Spirometry with Six Minute Walk Test and Grading of Dyspnoea in COPD Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anand%20K.%20Patel">Anand K. Patel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Patients with COPD have decreased pulmonary functions, which in turn reflect on their day-to-day activities. Objectives: To assess the correlation of forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) with the six-minute walk test (6MWT), and to correlate the Borg Rating of Perceived Exertion scale (Borg scale) and the Modified Medical Research Council (MMRC) dyspnea scale with the 6MWT, FVC, and FEV1. 
Method: In this prospective study, a total of 72 patients with COPD, diagnosed according to the GOLD guidelines, were enrolled after written consent was obtained. They were first asked to rate physical exertion on the Borg scale as well as the Modified Medical Research Council dyspnea scale and were then asked to perform pre- and post-bronchodilator spirometry, followed by the six-minute walk test. The findings were correlated by calculating the Pearson coefficient for each set and obtaining the p-values, with p < 0.05 considered statistically significant. Result: There was a significant correlation between spirometry and the 6MWT, suggesting that patients with lower measurements were unable to walk longer distances. However, FVC showed a stronger correlation than FEV1, and the MMRC scale had a stronger correlation with the 6MWT than the Borg scale. Conclusion: The study suggests that the 6MWT is a better test for monitoring patients with COPD. In spirometry, FVC, rather than FEV1, should be used when monitoring patients with COPD. The MMRC scale shows a stronger correlation than the Borg scale and should be used more often. 
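The core computation above, a Pearson coefficient between a spirometry measure and the 6MWT distance, can be sketched as follows. The patient values are invented for illustration (the study's data are not reproduced here), and in practice the p-value would come from a t-test on r with n−2 degrees of freedom.

```python
# Pearson correlation between two paired measurement series, computed from
# the definition: covariance of the series divided by the product of their
# standard deviations.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

fvc_percent = [45, 52, 60, 66, 71, 78, 84]         # hypothetical FVC (% predicted)
walk_m = [210, 250, 300, 320, 360, 400, 430]       # hypothetical 6MWT distance (m)

r = pearson_r(fvc_percent, walk_m)
print(f"r = {r:.3f}")  # a positive r means lower FVC goes with shorter walks
```

A perfectly linear increasing relationship gives r = 1, a perfectly decreasing one gives r = −1, and values near 0 indicate no linear association.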
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spirometry" title="spirometry">spirometry</a>, <a href="https://publications.waset.org/abstracts/search?q=6%20minute%20walk%20test" title=" 6 minute walk test"> 6 minute walk test</a>, <a href="https://publications.waset.org/abstracts/search?q=MMRC" title=" MMRC"> MMRC</a>, <a href="https://publications.waset.org/abstracts/search?q=Borg%20scale" title=" Borg scale"> Borg scale</a> </p> <a href="https://publications.waset.org/abstracts/83688/correlation-of-spirometry-with-six-minute-walk-test-and-grading-of-dyspnoea-in-copd-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83688.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">202</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=1" rel="prev">‹</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=1">1</a></li> <li class="page-item active"><span class="page-link">2</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=5">5</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=430">430</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=431">431</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=national%20image%20interpretability%20rating%20scale&page=3" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div 
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div 
class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>