<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: pooling</title> <meta name="description" content="Search results for: pooling"> <meta name="keywords" content="pooling"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research 
Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="pooling" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> 
<form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="pooling"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 50</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: pooling</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">50</span> HIV Incidence among Men Who Have Sex with Men Measured by Pooling Polymerase Chain Reaction, and Its Comparison with HIV Incidence Estimated by BED-Capture Enzyme-Linked Immunosorbent Assay and Observed in a Prospective Cohort</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mei%20Han">Mei Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinkou%20Zhao"> Jinkou Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Yao"> Yuan Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Liangui%20Feng"> Liangui Feng</a>, <a href="https://publications.waset.org/abstracts/search?q=Xianbin%20Ding"> Xianbin Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Guohui%20Wu"> Guohui Wu</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Chao%20Zhou"> Chao Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Lin%20Ouyang"> Lin Ouyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Rongrong%20Lu"> Rongrong Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bo%20Zhang"> Bo Zhang </a> </p> <p class="card-text"><strong>Abstract:</strong></p> To compare the HIV incidence estimated using BED capture enzyme linked immunosorbent assay (BED-CEIA) and observed in a cohort against the HIV incidence among men who have sex with men (MSM) measured by pooling polymerase chain reaction (pooling-PCR). A total of 617 MSM subjects were included in a respondent driven sampling survey in Chongqing in 2008. Among the 129 who tested HIV antibody positive, 102 were classified as long-term infections and 27 were assessed for recent HIV infection (RHI) using BED-CEIA. The remaining 488 HIV negative subjects were enrolled in the prospective cohort and followed up every 6 months to monitor HIV seroconversion. All of the 488 HIV negative specimens were assessed for acute HIV infection (AHI) using pooling-PCR. Among the 488 negative subjects in the open cohort, 214 (43.9%) were followed up for six months, yielding 107 person-years of observation, during which 14 subjects seroconverted. The observed HIV incidence was 12.5 per 100 person-years (95% CI=9.1-15.7). Among the 488 HIV negative specimens, 5 were identified with acute HIV infection using pooling-PCR at an annual rate of 14.02% (95% CI=1.73-26.30). The estimated HIV-1 incidence was 12.02% (95% CI=7.49-16.56) based on BED-CEIA. The HIV incidence estimated with the three different approaches differed among subgroups. In this highly HIV-prevalent MSM population, it cost US$ 1724 to detect one AHI case, while detecting one RHI case with the BED assay cost only US$ 42. 
The three approaches generated comparably high HIV incidence estimates; pooling-PCR and the prospective cohort are closer to the true incidence, while BED-CEIA appeared to be the most convenient and economical approach for evaluating HIV incidence in an at-risk population at the beginning of an HIV pandemic. HIV-1 incidence was alarmingly high among the MSM population in Chongqing, particularly within the subgroup under 25 years of age and among migrants aged 25 to 34 years. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BED-CEIA" title="BED-CEIA">BED-CEIA</a>, <a href="https://publications.waset.org/abstracts/search?q=HIV" title=" HIV"> HIV</a>, <a href="https://publications.waset.org/abstracts/search?q=incidence" title=" incidence"> incidence</a>, <a href="https://publications.waset.org/abstracts/search?q=pooled%20PCR" title=" pooled PCR"> pooled PCR</a>, <a href="https://publications.waset.org/abstracts/search?q=prospective%20cohort" title=" prospective cohort"> prospective cohort</a> </p> <a href="https://publications.waset.org/abstracts/75145/hiv-incidence-among-men-who-have-sex-with-men-measured-by-pooling-polymerase-chain-reaction-and-its-comparison-with-hiv-incidence-estimated-by-bed-capture-enzyme-linked-immunosorbent-assay-and-observed-in-a-prospective-cohort" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75145.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">49</span> Solar Power Monitoring and Control System using Internet of Things</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Oladapo%20Tolulope%20Ibitoye">Oladapo Tolulope Ibitoye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It has become imperative to harmonize energy poverty alleviation and carbon footprint reduction. This is geared towards embracing independent power generation at the local level, reducing the familiar problems of transmitting generated power. It will also support the broad adoption of electric vehicles and the direct current (DC) appliances that are currently flooding the global market. Solar power systems are gaining momentum as they are now an affordable and less complex alternative to fossil fuel-based power generation. However, several issues keep solar power systems from operating at optimum capacity. A key problem is inadequate monitoring of the energy pool available from solar irradiance, which could otherwise serve as a foundation for informed energy usage decisions and appropriate solar system control for effective energy pooling. The proposed technique utilizes the Internet of Things (IoT) to automate solar irradiance pooling by controlling solar photovoltaic panels autonomously for optimal usage. The technique achieves better solar irradiance exposure, resulting in 30% higher voltage pooling capacity than a system with static solar panels. The evaluation shows that the developed system possesses a higher voltage pooling capacity than one with statically positioned solar panels. 
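The control decision described above can be illustrated with a minimal sketch. This is an assumption about how such a controller might choose a panel orientation, not code from the paper; the function name and angle/irradiance values are hypothetical.

```python
# Hypothetical sketch (not the authors' implementation): an IoT controller
# sweeps candidate tilt angles, reads the irradiance sensor at each, and
# rotates the panel to the angle with the highest reading.

def best_panel_angle(readings):
    """readings: dict mapping candidate tilt angle (degrees) to measured
    irradiance (W/m^2). Returns the angle with the highest reading."""
    if not readings:
        raise ValueError("no irradiance readings supplied")
    return max(readings, key=readings.get)

# Example sweep: a static panel fixed at 0 degrees would harvest 610 W/m^2,
# while the controller rotates to 30 degrees for 790 W/m^2.
sweep = {0: 610.0, 15: 720.0, 30: 790.0, 45: 655.0}
print(best_panel_angle(sweep))  # -> 30
```

In practice the sweep would be driven by a servo and repeated periodically as the sun moves; the abstract's reported 30% gain over a static panel is the motivation for doing this continuously.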
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=solar%20system" title="solar system">solar system</a>, <a href="https://publications.waset.org/abstracts/search?q=internet%20of%20things" title=" internet of things"> internet of things</a>, <a href="https://publications.waset.org/abstracts/search?q=renewable%20energy" title=" renewable energy"> renewable energy</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20monitoring" title=" power monitoring"> power monitoring</a> </p> <a href="https://publications.waset.org/abstracts/163865/solar-power-monitoring-and-control-system-using-internet-of-things" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163865.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">48</span> Unsupervised Learning of Spatiotemporally Coherent Metrics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ross%20Goroshin">Ross Goroshin</a>, <a href="https://publications.waset.org/abstracts/search?q=Joan%20Bruna"> Joan Bruna</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Tompson"> Jonathan Tompson</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Eigen"> David Eigen</a>, <a href="https://publications.waset.org/abstracts/search?q=Yann%20LeCun"> Yann LeCun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. 
We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20clustering" title=" pattern clustering"> pattern clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=pooling" title=" pooling"> pooling</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification "> classification </a> </p> <a href="https://publications.waset.org/abstracts/29488/unsupervised-learning-of-spatiotemporally-coherent-metrics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29488.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">47</span> Facial Emotion Recognition Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Mishra">Ashutosh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikhil%20Goyal"> Nikhil Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 3D facial emotion recognition model based on deep learning is proposed in this paper. 
Two convolution layers and a pooling layer are employed in the deep learning architecture, with pooling applied after the convolution process. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of the deep learning-based system, a set of faces from the Kaggle dataset is used to assess the accuracy of the face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the significant gains in representation precision afforded by the nonlinearity of deep image representations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title="facial recognition">facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/139253/facial-emotion-recognition-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">46</span> Human Talent Management: A Research Agenda</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Mehraj%20Udin%20Ganaie">Mehraj Udin Ganaie</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Israrul%20Haque"> Mohammad Israrul Haque</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to enhance the theoretical and conceptual understanding of human talent management (HTM). With the help of an extensive review of the existing literature, we propose a conceptual framework and a few propositions to elucidate the influential relationship of competency focus, talent pooling, talent investment, and talenting orientation with the value creation of a firm. It is believed that the human talent management model will enhance the understanding of talent management orientation among practitioners and academicians. Practitioners will be able to align HTM orientation with business strategy wisely to yield better value for the business (shareholders, employees, owners, customers, agents, and other stakeholders). Future research directions explain how human talent management researchers can work on integrating these relationships and contribute towards the maturity of talent management by further exploring and validating the model empirically to enhance the body of knowledge. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=talent%20management%20orientation" title="talent management orientation">talent management orientation</a>, <a href="https://publications.waset.org/abstracts/search?q=competency%20focus" title=" competency focus"> competency focus</a>, <a href="https://publications.waset.org/abstracts/search?q=talent%20pooling" title=" talent pooling"> talent pooling</a>, <a href="https://publications.waset.org/abstracts/search?q=talent%20investment" title=" talent investment"> talent investment</a>, <a href="https://publications.waset.org/abstracts/search?q=talenting%20orientation" title=" talenting orientation"> talenting orientation</a> </p> <a href="https://publications.waset.org/abstracts/69757/human-talent-management-a-research-agenda" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69757.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">384</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">45</span> DMBR-Net: Deep Multiple-Resolution Bilateral Networks for Real-Time and Accurate Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pengfei%20Meng">Pengfei Meng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuangcheng%20Jia"> Shuangcheng Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Qian%20Li"> Qian Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We proposed a real-time high-precision semantic segmentation network based on a multi-resolution feature fusion module, the auxiliary feature extracting module, upsampling module, and atrous spatial pyramid pooling 
(ASPP) module. We designed a feature fusion structure that integrates sufficient features of different resolutions. We also studied the effect of the side-branch structure on the network; based on these findings, we used a side-branch auxiliary feature extraction layer to improve the effectiveness of the network. We also designed an upsampling module, which yields better results than the original upsampling module. In addition, we re-considered the locations and number of atrous spatial pyramid pooling (ASPP) modules and modified the network structure according to the experimental results to further improve its effectiveness. The network presented in this paper takes the backbone of BiSeNetV2 as its base network, on which we constructed and improved our structure. We named this network Deep Multiple-Resolution Bilateral Network for real-time segmentation, referred to as DMBR-Net. After experimental testing, our proposed DMBR-Net achieved 81.2% mIoU at 119FPS on the Cityscapes validation dataset, 80.7% mIoU at 109FPS on the CamVid test dataset, and 29.9% mIoU at 78FPS on the COCOStuff test dataset. Compared with all lightweight real-time semantic segmentation networks, our network achieves the highest accuracy at an appropriate speed. 
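Several abstracts on this page lean on pooling as the basic downsampling primitive behind modules such as spatial pyramid pooling. As a minimal, illustrative sketch (not code from any of the listed papers), a 2×2 max pool with stride 2 halves each spatial dimension of a feature map by keeping only the maximum in each window:

```python
# Illustrative sketch only: 2x2 max pooling with stride 2 on a
# list-of-lists feature map whose dimensions are assumed even.
def max_pool_2x2(grid):
    """Collapse each non-overlapping 2x2 window to its maximum value."""
    return [
        [max(grid[i][j], grid[i][j + 1], grid[i + 1][j], grid[i + 1][j + 1])
         for j in range(0, len(grid[0]), 2)]
        for i in range(0, len(grid), 2)
    ]

feat = [[1, 2, 0, 1],
        [3, 4, 1, 0],
        [5, 1, 2, 2],
        [0, 2, 3, 9]]
print(max_pool_2x2(feat))  # -> [[4, 1], [5, 9]]
```

Pyramid-pooling modules apply this idea at several window sizes in parallel and fuse the results, which is what gives networks like the one above multi-resolution context.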
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-resolution%20feature%20fusion" title="multi-resolution feature fusion">multi-resolution feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=atrous%20convolutional" title=" atrous convolutional"> atrous convolutional</a>, <a href="https://publications.waset.org/abstracts/search?q=bilateral%20networks" title=" bilateral networks"> bilateral networks</a>, <a href="https://publications.waset.org/abstracts/search?q=pyramid%20pooling" title=" pyramid pooling"> pyramid pooling</a> </p> <a href="https://publications.waset.org/abstracts/147792/dmbr-net-deep-multiple-resolution-bilateral-networks-for-real-time-and-accurate-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147792.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">44</span> Recognition of Gene Names from Gene Pathway Figures Using Siamese Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Azam">Muhammad Azam</a>, <a href="https://publications.waset.org/abstracts/search?q=Micheal%20Olaolu%20Arowolo"> Micheal Olaolu Arowolo</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20He"> Fei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Mihail%20Popescu"> Mihail Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Xu"> Dong Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of biological papers is growing quickly, which means that the number of biological pathway 
figures in those papers is also increasing quickly. Each pathway figure shows extensive biological information, like the names of genes and how the genes are related. However, manually annotating pathway figures takes a lot of time and work. Even though using advanced image understanding models could speed up the process of curation, these models still need to be made more accurate. To improve gene name recognition from pathway figures, we applied a Siamese network to map image segments to a library of pictures containing known genes, in a similar way to person recognition from photos in many photo applications. We used a triplet loss function and a triplet spatial pyramid pooling network (TSPP-Net), which combines a triplet convolutional neural network with spatial pyramid pooling. We compared VGG19 and VGG16 as the Siamese network model. VGG16 achieved better performance, with an accuracy of 93%, which is much higher than OCR results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20pathway" title="biological pathway">biological pathway</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20understanding" title=" image understanding"> image understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20name%20recognition" title=" gene name recognition"> gene name recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG" title=" VGG"> VGG</a> </p> <a href="https://publications.waset.org/abstracts/160723/recognition-of-gene-names-from-gene-pathway-figures-using-siamese-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160723.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">43</span> On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20R.%20N.%20Idris">N. R. N. Idris</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Baharom"> S. Baharom </a> </p> <p class="card-text"><strong>Abstract:</strong></p> A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square-error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at IPD level, including the AD does not significantly improve the accuracy of the estimates. 
Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aggregate%20data" title="aggregate data">aggregate data</a>, <a href="https://publications.waset.org/abstracts/search?q=combined-level%20data" title=" combined-level data"> combined-level data</a>, <a href="https://publications.waset.org/abstracts/search?q=individual%20patient%20data" title=" individual patient data"> individual patient data</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-analysis" title=" meta-analysis"> meta-analysis</a> </p> <a href="https://publications.waset.org/abstracts/8777/on-pooling-different-levels-of-data-in-estimating-parameters-of-continuous-meta-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42</span> Surgical Prep-Related Burns in Laterally Positioned Hip Procedures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Kenny">B. Kenny</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20%20Dixon"> M. Dixon</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Boshell"> A. 
Boshell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of alcoholic surgical prep was recently introduced into the Royal Newcastle Center for elective procedures. In the past 3 months there have been a significant number of burns believed to be related to ‘pooling’ of this surgical prep in patients undergoing procedures where they are placed in the lateral position with hip bolsters. The aim of the audit was to determine the reason for the burns, analyze what pre-existing factors may contribute to the development of the burns, and identify what can be changed to prevent further burns from occurring. All patients undergoing a procedure performed on the hip who were placed in the lateral position with sacral and anterior superior iliac spine (ASIS) support with ‘bolsters’ were included in the audit. Patients who developed a ‘burn’ were recorded; details of the surgery, demographics, surgical prep used, and length of surgery were obtained, and photographs were taken to document the burn. Measures were then taken to prevent further burns, and their efficacy was documented. Overall, 14 patients developed burns over the ipsilateral ASIS. Of these, 13 were Total Hip Arthroplasty (THA) and 1 was a femoral nail removal. All patients had Chlorhexidine 0.5% in Alcohol 70% Tinted Red surgical preparation or Betadine Alcoholic Skin Prep (70% etoh). Patients were set up in the standard lateral decubitus position with sacral and bilateral ASIS bolsters with a valband covering. 86% of patients were found to have pre-existing hypersensitivities to various substances. There is very little literature besides a few case reports on surgical prep-related burns. The case reports that do exist relate to tourniquet-associated burns, and no literature examines ‘bolster’-related burns. The burns are hypothesized to be caused by pooling of the alcoholic solution, which is amplified by the use of Valband. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arthroplasty" title="arthroplasty">arthroplasty</a>, <a href="https://publications.waset.org/abstracts/search?q=chemical%20burns" title=" chemical burns"> chemical burns</a>, <a href="https://publications.waset.org/abstracts/search?q=wounds" title=" wounds"> wounds</a>, <a href="https://publications.waset.org/abstracts/search?q=rehabilitation" title=" rehabilitation"> rehabilitation</a> </p> <a href="https://publications.waset.org/abstracts/20980/surgical-prep-related-burns-in-laterally-positioned-hip-procedures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">300</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">41</span> Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mary%20Hanhoun">Mary Hanhoun</a>, <a href="https://publications.waset.org/abstracts/search?q=Jilla%20Bamarni"> Jilla Bamarni</a>, <a href="https://publications.waset.org/abstracts/search?q=Anne-Sophie%20Bougard"> Anne-Sophie Bougard</a> </p> <p class="card-text"><strong>Abstract:</strong></p> INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services, based on Industrial Ecology principles, to industrial firms and territorial planners/managers. This project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployed in other territories. 
The Salaise-Sablons area lies at the boundary of five departments on a major European multimodal economic axis (river, rail and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory and TREDI). The INSPIR’ECO project is based on the premise that local stakeholders need services to pool and share their activities/equipment/purchases/materials. These services aim to: 1) initiate and promote exchanges between existing companies and 2) identify synergies between pre-existing industries and future companies that could be implemented in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are based on an IT tool intended to allow interested local stakeholders to take decisions. Thus, this IT tool: - includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; - is meant for industrial and territorial managers/planners; - is designed to be used for each new industrial project. The specification of the IT tool is made through an agile process throughout the INSPIR’ECO project, fed with: - users’ expectations, gathered in workshop sessions where mock-up interfaces are displayed; - data availability, based on a local and industrial data inventory. 
These inputs allow the tool to be specified not only with technical and methodological constraints (notably those from the economic and environmental assessments) but also with data availability and users’ expectations. A review of innovative resource-management initiatives in port areas was carried out at the beginning of the project to feed the service-design step. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=development%20opportunities" title="development opportunities">development opportunities</a>, <a href="https://publications.waset.org/abstracts/search?q=INSPIR%E2%80%99ECO" title=" INSPIR’ECO"> INSPIR’ECO</a>, <a href="https://publications.waset.org/abstracts/search?q=INSPIRA" title=" INSPIRA"> INSPIRA</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20ecology" title=" industrial ecology"> industrial ecology</a>, <a href="https://publications.waset.org/abstracts/search?q=planification" title=" planification"> planification</a>, <a href="https://publications.waset.org/abstracts/search?q=synergy%20identification" title=" synergy identification"> synergy identification</a> </p> <a href="https://publications.waset.org/abstracts/42572/application-of-industrial-ecology-to-the-inspira-zone-territory-planification-and-new-activities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42572.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">40</span> Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Nouha%20Khediri">Nouha Khediri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ben%20Ammar"> Mohammed Ben Ammar</a>, <a href="https://publications.waset.org/abstracts/search?q=Monji%20Kherallah"> Monji Kherallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, facial emotion recognition (FER) has become increasingly essential to understand the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolution layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. This paper also reviews prior work on deep-learning-based facial emotion recognition. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work. 
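The pooling layers and softmax output stage named in the abstract can be sketched in plain Python. This is a minimal illustration only: the toy feature map and logits below are made-up values, not taken from the CV-FER model, and the five convolution layers are omitted.

```python
import math

def max_pool_2x2(image):
    """2x2 max pooling with stride 2, the downsampling step used between convolution layers."""
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1], image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w - 1, 2)]
            for i in range(0, h - 1, 2)]

def softmax(logits):
    """Softmax classifier output layer: converts raw scores to class probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 4x4 feature map -> 2x2 after one pooling stage
fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 5, 3, 2],
        [1, 2, 0, 4]]
pooled = max_pool_2x2(fmap)   # [[4, 2], [5, 4]]

# Seven logits, matching FER2013's seven emotion classes; values are illustrative
probs = softmax([2.0, 1.0, 0.5, 0.1, 0.0, -1.0, -2.0])
```

Each pooling stage halves the spatial resolution, so five stages reduce a 48x48 FER2013 input substantially before classification; the probabilities from `softmax` sum to one, and the predicted emotion is the class with the largest probability.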
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/150291/deep-learning-based-approach-to-facial-emotion-recognition-through-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">39</span> The UAV Feasibility Trajectory Prediction Using Convolution Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adrien%20Marque">Adrien Marque</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Delahaye"> Daniel Delahaye</a>, <a href="https://publications.waset.org/abstracts/search?q=Pierre%20Mar%C3%A9chal"> Pierre Maréchal</a>, <a href="https://publications.waset.org/abstracts/search?q=Isabelle%20Berry"> Isabelle Berry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Wind direction and uncertainty are crucial in aircraft or unmanned aerial vehicle trajectories. By computing wind covariance matrices at each spatial grid point, these spatial grids can be defined as images whose elements are symmetric positive definite matrices. 
A data pre-processing step and specific convolution, max-pooling, and flatten layers are implemented to process such images. Then, the neural network is applied to spatial grids, whose elements are wind covariance matrices, to solve classification problems related to the feasibility of unmanned aerial vehicles based on wind direction and wind uncertainty. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wind%20direction" title="wind direction">wind direction</a>, <a href="https://publications.waset.org/abstracts/search?q=uncertainty%20level" title=" uncertainty level"> uncertainty level</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title=" unmanned aerial vehicle"> unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=SPD%20matrices" title=" SPD matrices"> SPD matrices</a> </p> <a href="https://publications.waset.org/abstracts/188367/the-uav-feasibility-trajectory-prediction-using-convolution-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">49</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">38</span> Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaeyoung%20Lee">Jaeyoung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autonomous 
driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perceived performance in driving environments that vary with time and season. The image segmentation method using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA structure is fixed, normal convolution may be more efficient than depthwise separable convolution depending on memory access overhead. Thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). 
The suggested method gets stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks on the Cityscapes validation set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20network" title="edge network">edge network</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20network" title=" embedded network"> embedded network</a>, <a href="https://publications.waset.org/abstracts/search?q=MMA" title=" MMA"> MMA</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20multiplication%20accelerator" title=" matrix multiplication accelerator"> matrix multiplication accelerator</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation%20network" title=" semantic segmentation network"> semantic segmentation network</a> </p> <a href="https://publications.waset.org/abstracts/125967/embedded-semantic-segmentation-network-optimized-for-matrix-multiplication-accelerator" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">37</span> Facial Emotion Recognition with Convolutional Neural Network Based Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Koray%20U.%20Erbas">Koray U. Erbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as classification, object detection, segmentation, image editing, etc. In this work, the Facial Emotion Recognition task is performed by a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 Dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the Pooling Layer, Dropout and Batch Normalization are presented. 
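Batch Normalization, one of the components ablated above, can be illustrated with a short stand-alone sketch. This is a hedged, simplified version for a single feature over one batch (the activation values, `gamma`, and `beta` below are illustrative, not from the paper; a real layer learns `gamma` and `beta` and tracks running statistics for inference).

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance,
    then scale by gamma and shift by beta (the learnable parameters)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

# Illustrative pre-activation values for one feature across a batch of four samples
acts = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(acts)
# After normalization the batch has mean ~0 and variance ~1,
# which stabilizes training across layers.
```

With default `gamma=1.0` and `beta=0.0` this is pure standardization; the learnable parameters let the network undo the normalization where that helps.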
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning%20based%20FER" title=" deep learning based FER"> deep learning based FER</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a> </p> <a href="https://publications.waset.org/abstracts/128197/facial-emotion-recognition-with-convolutional-neural-network-based-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128197.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> Assessing the Quality of Clinical Photographs Taken for Orthodontic Patients at Queen’s Hospital, Romford</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maya%20Agarwala">Maya Agarwala</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objectives: Audit the quality of clinical photographs taken for Orthodontic patients at Queen’s hospital, Romford. Design and setting: All Orthodontic photographs are taken in the Medical Photography Department at Queen’s Hospital. Retrospective audit with data collected between January - March 2023. Gold standard: Institute of Medical Illustrators (IMI) standard 12 photographs: 6 extraoral and 6 intraoral. 
100% of patients should have the standard 12 photographs, of satisfactory diagnostic quality. Materials and methods: 30 patients were randomly selected. All photographs were analysed against the IMI gold standard. Results: A total of 360 photographs were analysed. 100% of the photographs had the 12 photographic views. Of these, 93.1% met the gold standard. Of the extraoral photos, 99.4% met the gold standard; 0.6% had incorrect head positioning. Of the intraoral photographs, 87.2% met the gold standard. The most common intraoral errors were the presence of saliva pooling (7.2%), insufficient soft tissue retraction (3.3%), incomplete occlusal surface visibility (2.2%) and mirror fogging (1.1%). Conclusion: The gold standard was not met; however, the overall standard of Orthodontic photographs is high. Further training of the Medical Photography team is needed to improve the quality of photographs. Following the training, the audit will be repeated. High-quality clinical photographs are an important part of clinical record keeping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=orthodontics" title="orthodontics">orthodontics</a>, <a href="https://publications.waset.org/abstracts/search?q=paediatric" title=" paediatric"> paediatric</a>, <a href="https://publications.waset.org/abstracts/search?q=photography" title=" photography"> photography</a>, <a href="https://publications.waset.org/abstracts/search?q=audit" title=" audit"> audit</a> </p> <a href="https://publications.waset.org/abstracts/167678/assessing-the-quality-of-clinical-photographs-taken-for-orthodontic-patients-at-queens-hospital-romford" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167678.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> Traffic Congestion Analysis and Modeling for Urban Roads of Srinagar City</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adinarayana%20Badveeti">Adinarayana Badveeti</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Shafi%20Mir"> Mohammad Shafi Mir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Srinagar City, India, traffic congestion is a condition on transport networks that occurs as use increases and is characterized by slower speeds, longer trip times, and increased vehicular queuing. Traffic congestion is conventionally measured using indicators such as roadway level-of-service, the Travel Time Index and their variants. Several measures have been taken to counteract congestion, such as road pricing, car pooling, and improved traffic management. 
While new road construction can temporarily relieve congestion, in the longer term it simply encourages further growth in car traffic through increased travel and a switch away from public transport. The full paper report, on which this abstract is based, aims to provide policymakers and technical staff with the real-time data, conceptual framework and guidance on some of the engineering tools necessary to manage congestion in such a way as to reduce its overall impact on individuals, families, communities, and societies. Dynamic, affordable, liveable and attractive urban regions will never be free of congestion. Road transport policies, however, should seek to manage congestion on a cost-effective basis with the aim of reducing the burden that excessive congestion imposes upon travellers and urban dwellers throughout the urban road network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20congestion" title="traffic congestion">traffic congestion</a>, <a href="https://publications.waset.org/abstracts/search?q=modeling" title=" modeling"> modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20management" title=" traffic management"> traffic management</a>, <a href="https://publications.waset.org/abstracts/search?q=travel%20time%20index" title=" travel time index"> travel time index</a> </p> <a href="https://publications.waset.org/abstracts/82508/traffic-congestion-analysis-and-modeling-for-urban-roads-of-srinagar-city" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82508.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">319</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> Towards Long-Range Pixels 
Connection for Context-Aware Semantic Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Zubair%20Khan">Muhammad Zubair Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Yugyung%20Lee"> Yugyung Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deep learning has recently achieved an enormous response in semantic image segmentation. The previously developed U-Net-inspired architectures operate with successive stride and pooling operations, leading to spatial data loss. Also, these methods fail to establish long-range pixel connections that preserve context knowledge and reduce spatial loss in prediction. This article develops an encoder-decoder architecture with bi-directional LSTMs embedded in long skip connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths to find dependencies and correlations between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applies batch normalization to reduce internal covariate shift in data distributions. The empirical evidence shows a promising response from our method compared with other semantic segmentation techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=pixels%20connection" title=" pixels connection"> pixels connection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/147965/towards-long-range-pixels-connection-for-context-aware-semantic-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Studying the Effects of Conditional Conservatism and Lack of Information Asymmetry on the Cost of Capital of the Accepted Companies in Tehran Stock Exchange</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fayaz%20Moosavi">Fayaz Moosavi</a>, <a href="https://publications.waset.org/abstracts/search?q=Saeid%20Moradyfard"> Saeid Moradyfard</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the methods of avoiding management fraud and increasing the quality of financial information is the disclosure of qualitative features of financial information, including the conservatism characteristic. 
Although a conservative approach, while boosting the quality of financial information, can reduce informational risk and a business unit's cost of capital, it can also, by presenting an unduly unfavorable image of the business unit's situation, raise the risk of failure in repaying principal and interest, and consequently the business unit's cost of capital. In order to determine whether conservatism ultimately increases or decreases the cost of capital, or has no influence on it, information on companies listed on the Tehran Stock Exchange from 2007 to 2012 is utilized by applying the pooling method; the sample included 124 companies. The results of the study revealed a negative and significant relationship between conditional conservatism and the cost of capital of the company. In other words, if bad and unfavorable news and signals are reflected in accounting profit sooner than good news, the cost of capital of the company increases. In addition, there is a positive and significant relationship between the cost of capital and the lack of information asymmetry. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conditional%20conservatism" title="conditional conservatism">conditional conservatism</a>, <a href="https://publications.waset.org/abstracts/search?q=lack%20of%20information%20asymmetry" title=" lack of information asymmetry"> lack of information asymmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20cost%20of%20capital" title=" the cost of capital"> the cost of capital</a>, <a href="https://publications.waset.org/abstracts/search?q=stock%20exchange" title=" stock exchange"> stock exchange</a> </p> <a href="https://publications.waset.org/abstracts/53090/studying-the-effects-of-conditional-conservatism-and-lack-of-information-asymmetry-on-the-cost-of-capital-of-the-accepted-companies-in-tehran-stock-exchange" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53090.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> REITs India- New Investment Avenue for Financing Urban Infrastructure in India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rajat%20Kapoor">Rajat Kapoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Indian real estate sector is the second largest employer after agriculture and is slated to grow at 30 percent over the next decade. Indian cities have shown tumultuous growth over the last two decades. With the growing need for infrastructure, it has become inevitable for the real estate sector to adopt a more organized and transparent system of investment. 
SPVs such as REITs ensure transparency, making real estate accessible as an investment, with a realistic income expectation, to those who find it difficult to purchase property outright. REITs, or real estate investment trusts, are instruments for pooling funds, similar to mutual funds. In simpler terms, a REIT is an investment vehicle in the form of a trust which holds and manages large commercial rent-earning properties on behalf of investors and distributes most of its profit as dividends. A REIT enables individual investors to invest their money in a diversified portfolio of commercial real estate assets while providing liquidity to developers as an easy exit option and channelling funds to new projects. However, the success of REITs depends heavily on a taxation structure that makes such models attractive and adaptive enough for both developers and investors to opt for this investment option. This paper is intended to provide an overview of REITs in the context of the Indian real estate scenario. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indian%20real%20estate" title="Indian real estate">Indian real estate</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20estate%20infrastructure%20trusts" title=" real estate infrastructure trusts"> real estate infrastructure trusts</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20finance" title=" urban finance"> urban finance</a>, <a href="https://publications.waset.org/abstracts/search?q=infrastructure%20investment%20trusts" title=" infrastructure investment trusts"> infrastructure investment trusts</a> </p> <a href="https://publications.waset.org/abstracts/33069/reits-india-new-investment-avenue-for-financing-urban-infrastructure-in-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33069.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Effectiveness of Weather Index Insurance for Smallholders in Ethiopia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Federica%20Di%20Marcantonio">Federica Di Marcantonio</a>, <a href="https://publications.waset.org/abstracts/search?q=Antoine%20Leblois"> Antoine Leblois</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20G%C3%B6bel"> Wolfgang Göbel</a>, <a href="https://publications.waset.org/abstracts/search?q=Herv%C3%A8%20Kerdiles"> Hervè Kerdiles</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Weather-related shocks can threaten the ability of farmers to maintain their agricultural output and food security levels. 
Informal coping mechanisms (i.e. migration or community risk sharing) have always played a significant role in mitigating the negative effects of weather-related shocks in Ethiopia, but they have been found to be an incomplete strategy, particularly as a response to covariate shocks. In particular, as an alternative to traditional risk-pooling products, an innovative form of insurance known as index-based insurance has received a lot of attention from researchers and international organizations, leading to an increased number of pilot initiatives in many countries. Despite the potential benefit of the product in protecting the livelihoods of farmers and pastoralists against climate shocks, to date there has been an unexpectedly low uptake. Using information from current pilot projects on index-based insurance in Ethiopia, this paper discusses the determinants of uptake that have so far undermined the scaling-up of the products, focusing in particular on weather data availability, price affordability and willingness to pay. We found that, aside from data constraint issues, high price elasticity and low willingness to pay represent impediments to the development of the market. These results bring us to rethink the role of index insurance as a product for enhancing smallholders’ response to covariate shocks, and particularly for improving their food security. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=index-based%20insurance" title="index-based insurance">index-based insurance</a>, <a href="https://publications.waset.org/abstracts/search?q=willingness%20to%20pay" title=" willingness to pay"> willingness to pay</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20information" title=" satellite information"> satellite information</a>, <a href="https://publications.waset.org/abstracts/search?q=Ethiopia" title=" Ethiopia"> Ethiopia</a> </p> <a href="https://publications.waset.org/abstracts/40493/effectiveness-of-weather-index-insurance-for-smallholders-in-ethiopia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> The Relationship Between Hourly Compensation and Unemployment Rate Using the Panel Data Regression Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20K.%20Ashiquer%20Rahman">S. K. Ashiquer Rahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper concentrates on the importance of hourly compensation and the significance of the unemployment rate. The unemployment rate and hourly compensation are two of the most important indicators for a nation: they are not merely statistics but have profound effects on individuals, families, and the economy, and they are inversely related to one another. The unemployment rate will probably decline as hourly compensation in manufacturing rises. 
Conversely, higher compensation can improve job prospects and reduce unemployment, so increased hourly compensation in the manufacturing sector could have a favorable effect on job mobility. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, we use panel data regression models to evaluate the expected link between hourly compensation and the unemployment rate, in order to determine the effect of hourly compensation on the unemployment rate. We estimate the fixed effects model, evaluate the error components, and determine which model (the FEM or ECM) is better by pooling all 60 observations. We then analyze the data for three countries (the United States, Canada and the United Kingdom) using panel data regression models. Finally, we provide results, analysis and a summary of the research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful information to help the government and academic community use an econometric and social approach to lessen the effect of hourly compensation on the unemployment rate. 
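As a rough sketch of the fixed-effects ("within") estimator used in panel regressions like the one described above — on invented data (three countries, twenty periods, a made-up slope of -0.05), not the paper's dataset — country-specific intercepts can be swept out by demeaning each variable within its country before running OLS:

```python
import numpy as np
import pandas as pd

# Hypothetical balanced panel: 3 countries x 20 periods = 60 observations,
# mirroring the paper's sample size but with synthetic numbers.
rng = np.random.default_rng(0)
countries = np.repeat(["US", "CA", "UK"], 20)
alpha = {"US": 2.0, "CA": 1.0, "UK": 3.0}           # unobserved country effects
comp = rng.uniform(10, 40, size=60)                  # hourly compensation
unemp = np.array([alpha[c] for c in countries]) - 0.05 * comp + rng.normal(0, 0.1, 60)
df = pd.DataFrame({"country": countries, "comp": comp, "unemp": unemp})

# Within transformation: demeaning by country sweeps out the fixed effects.
df["comp_w"] = df["comp"] - df.groupby("country")["comp"].transform("mean")
df["unemp_w"] = df["unemp"] - df.groupby("country")["unemp"].transform("mean")

# OLS on the demeaned data recovers the fixed-effects slope.
beta = np.linalg.lstsq(df[["comp_w"]].to_numpy(), df["unemp_w"].to_numpy(), rcond=None)[0][0]
print(round(beta, 3))  # close to the true slope of -0.05
```

The same slope would come from regressing on country dummies (the LSDV form); dedicated econometrics packages add standard errors, the random-effects estimator, and Hausman-type tests for choosing between them.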
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hourly%20compensation" title="hourly compensation">hourly compensation</a>, <a href="https://publications.waset.org/abstracts/search?q=Unemployment%20rate" title=" Unemployment rate"> Unemployment rate</a>, <a href="https://publications.waset.org/abstracts/search?q=panel%20data%20regression%20models" title=" panel data regression models"> panel data regression models</a>, <a href="https://publications.waset.org/abstracts/search?q=dummy%20variables" title=" dummy variables"> dummy variables</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20effects%20model" title=" random effects model"> random effects model</a>, <a href="https://publications.waset.org/abstracts/search?q=fixed%20effects%20model" title=" fixed effects model"> fixed effects model</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20linear%20regression%20model" title=" the linear regression model"> the linear regression model</a> </p> <a href="https://publications.waset.org/abstracts/183027/the-relationship-between-hourly-compensation-and-unemployment-rate-using-the-panel-data-regression-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183027.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Refined Edge Detection Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omar%20Elharrouss">Omar Elharrouss</a>, <a href="https://publications.waset.org/abstracts/search?q=Youssef%20Hmamouche"> Youssef Hmamouche</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Assia%20Kamal%20Idrissi"> Assia Kamal Idrissi</a>, <a href="https://publications.waset.org/abstracts/search?q=Btissam%20El%20Khamlichi"> Btissam El Khamlichi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amal%20El%20Fallah-Seghrouchni"> Amal El Fallah-Seghrouchni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against varied backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods such as Sobel and Canny. However, images of complex scenes still represent a challenge for these methods, and the edges detected by existing approaches suffer from unrefined results, with output images containing many erroneous edges. To overcome this, in this paper, using the mechanism of residual learning, a refined edge detection network (RED-Net) is proposed. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated using the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images. 
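Since the abstract turns on how pooling outputs are fused with the previous layer, here is a minimal NumPy sketch — not the actual RED-Net, whose details are not given here — of 2x2 max pooling plus a residual-style skip connection at the matching resolution; the array values are arbitrary:

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling with stride 2 on an (H, W) feature map (H, W even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feat = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for a feature map
pooled = max_pool2x2(feat)                        # coarse edge responses

# Residual-style combination: stride the previous layer down to the pooled
# resolution so the two can be added, preserving information across stages.
skip = feat[::2, ::2]
fused = pooled + skip
print(pooled)
```

Real networks learn the downsampling and fusion; the point here is only that pooled outputs and earlier-layer outputs must be brought to the same resolution before they can be combined.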
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title="edge detection">edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=scale-representation" title=" scale-representation"> scale-representation</a>, <a href="https://publications.waset.org/abstracts/search?q=backbone" title=" backbone"> backbone</a> </p> <a href="https://publications.waset.org/abstracts/150865/refined-edge-detection-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150865.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Improving Similarity Search Using Clustered Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Deokho%20Kim">Deokho Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Wonwoo%20Lee"> Wonwoo Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaewoong%20Lee"> Jaewoong Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Teresa%20Ng"> Teresa Ng</a>, <a href="https://publications.waset.org/abstracts/search?q=Gun-Ill%20Lee"> Gun-Ill Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiwon%20Jeong"> Jiwon Jeong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for improving object 
search accuracy using a deep learning model. A major limitation to providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, limiting accuracy when measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms state-of-the-art object similarity scoring techniques at finding exact items, achieving 86.5% accuracy compared with 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric. 
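A toy illustration of the retrieval idea — pooling an intermediate activation into a compact descriptor and comparing descriptors — using random arrays in place of real CNN features (the shapes, the global-average-pooling choice, and the noise level are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def global_avg_pool(feature_map):
    """Collapse a (C, H, W) intermediate activation into a C-dim descriptor."""
    return feature_map.mean(axis=(1, 2))

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 4, 4))                       # query image activation
same_cluster = base + rng.normal(0, 0.05, base.shape)   # near-duplicate image
other_cluster = rng.normal(size=(8, 4, 4))              # unrelated image

sim_same = cosine_similarity(global_avg_pool(base), global_avg_pool(same_cluster))
sim_other = cosine_similarity(global_avg_pool(base), global_avg_pool(other_cluster))
print(sim_same > sim_other)  # True
```

In the paper's setup the descriptors come from a trained network rather than random arrays, but retrieval still reduces to ranking candidates by such a pooled-feature distance.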
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20search" title="visual search">visual search</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/92185/improving-similarity-search-using-clustered-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92185.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> Is Hormone Replacement Therapy Associated with Age-Related Macular Degeneration? 
A Systematic Review and Meta-Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hongxin%20Zhao">Hongxin Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shibing%20Yang"> Shibing Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingming%20Yi"> Bingming Yi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yi%20Ning"> Yi Ning</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: A few studies have found evidence that exposure to endogenous or postmenopausal exogenous estrogens may be associated with a lower prevalence of age-related macular degeneration (AMD), but dispute over this association is ongoing due to inconsistent results reported by different studies. Objectives: To conduct a systematic review and meta-analysis to investigate the association between hormone replacement therapy (HRT) use and AMD. Methods: Relevant studies that assessed the association between HRT and AMD were searched through four databases (PubMed, Web of Science, Cochrane Library, EMBASE) and the reference lists of retrieved studies. Study selection, data extraction and quality assessment were conducted by three independent reviewers. Fixed-effect meta-analyses were performed to estimate the association between HRT ever-use and AMD by pooling risk ratios (RR) or odds ratios (OR) across studies. Results: The review identified 2 prospective and 7 cross-sectional studies with 93,992 female participants that reported an estimate of the association between HRT ever-use and presence of early AMD or late AMD. Meta-analyses showed that there were no statistically significant associations between HRT ever-use and early AMD (pooled RR for cohort studies was 1.04, 95% CI 0.86 - 1.24; pooled OR for cross-sectional studies was 0.91, 95% CI 0.82 - 1.01). 
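The fixed-effect pooling described here is standard inverse-variance weighting on the log scale; the sketch below uses hypothetical study odds ratios and confidence intervals, not the reviewed studies' data:

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (invented for illustration).
studies = [(0.85, 0.70, 1.03), (0.95, 0.80, 1.13), (0.92, 0.78, 1.09)]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # back out SE from CI width
    w = 1 / se**2                                      # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se), math.exp(pooled_log + 1.96 * pooled_se))
print(round(pooled_or, 2), tuple(round(c, 2) for c in ci))
```

Because each study's weight is the inverse of its variance, precise (narrow-CI) studies dominate the pooled estimate; a random-effects model would additionally widen the weights by a between-study variance term.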
The pooled results from cross-sectional studies also showed no statistically significant association between HRT ever-use and late AMD (OR 1.01; 95% CI 0.89 - 1.15). Conclusions: The pooled effects from observational studies published to date indicate that HRT use is associated with neither early nor late AMD. Exposure to HRT may not protect women from developing AMD. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hormone%20replacement%20therapy" title="hormone replacement therapy">hormone replacement therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=age-related%20macular%20degeneration" title=" age-related macular degeneration"> age-related macular degeneration</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-analysis" title=" meta-analysis"> meta-analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=systematic%20review" title=" systematic review "> systematic review </a> </p> <a href="https://publications.waset.org/abstracts/31385/is-hormone-replacement-therapy-associated-with-age-related-macular-degeneration-a-systematic-review-and-meta-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31385.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Identification and Prioritisation of Students Requiring Literacy Intervention and Subsequent Communication with Key Stakeholders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emilie%20Zimet">Emilie Zimet</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During networking and NCCD 
moderation meetings, best practices for identifying students who require Literacy Intervention are often discussed. Once these students are identified, consideration is given to the most effective process for prioritising those who have the greatest need for Literacy Support, the allocation of resources, the tracking of intervention effectiveness, and communication with teachers, external providers and parents. Through a workshop, the group will investigate best practices to identify students who require literacy support and strategies to communicate and track their progress. In groups, participants will examine what they do in their settings and then compare with other models, including the researcher’s model, to decide the most effective path to identification and communication. Participants will complete a worksheet at the beginning of the session to deeply consider their current approaches. The participants will be asked to critically analyse their own identification processes for Literacy Intervention, ensuring students are not overlooked if they fall into the borderline category. A cut-off for students to access intervention will be considered so as not to place strain on already stretched resources. Furthermore, communicating learning needs and differentiation strategies to staff is paramount to the success of an intervention, and participants will look at the frequency of communication used to share such strategies and updates. At the end of the session, the group will look at creating or evolving models that allow for best practices in the identification and communication of Literacy Interventions. The proposed outcome for this research is to develop a model for identifying students requiring Literacy Intervention that incorporates the allocation of resources and communication with key stakeholders. 
This will be done by pooling information and discussing a variety of models used in the participants' school settings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=identification" title="identification">identification</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20selection" title=" student selection"> student selection</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20education" title=" special education"> special education</a>, <a href="https://publications.waset.org/abstracts/search?q=school%20policy" title=" school policy"> school policy</a>, <a href="https://publications.waset.org/abstracts/search?q=planning%20for%20intervention" title=" planning for intervention"> planning for intervention</a> </p> <a href="https://publications.waset.org/abstracts/183367/identification-and-prioritisation-of-students-requiring-literacy-intervention-and-subsequent-communication-with-key-stakeholders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">47</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ethan%20James">Ethan James</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal 
diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyzes these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network architecture with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging" title=" imaging"> imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20devices" title=" medical devices"> medical devices</a>, <a href="https://publications.waset.org/abstracts/search?q=ophthalmic%20devices" title=" ophthalmic devices"> ophthalmic devices</a>, <a href="https://publications.waset.org/abstracts/search?q=ophthalmology" title=" ophthalmology"> ophthalmology</a>, <a href="https://publications.waset.org/abstracts/search?q=retina" title=" retina"> retina</a> </p> <a href="https://publications.waset.org/abstracts/127742/medical-diagnosis-of-retinal-diseases-using-artificial-intelligence-deep-learning-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127742.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">181</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhongmin%20Wang">Zhongmin Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wudong%20Fan"> Wudong Fan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hengshan%20Zhang"> Hengshan Zhang</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Yimin%20Zhou"> Yimin Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In data-driven prognostic methods, the accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually fused from statistical features extracted from vibration signals. However, existing health indicators have two drawbacks: (1) statistical features with different ranges contribute differently to the health indicator, and expert knowledge is required to extract them; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA. 
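A minimal NumPy sketch of the first step above — turning a 1-D signal into a time-frequency image with a continuous wavelet transform — using a hand-rolled Ricker (Mexican-hat) wavelet and a toy two-tone signal; the scale range and wavelet choice are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width parameter a, sampled at `points` steps."""
    t = np.arange(points) - (points - 1) / 2
    amp = 2 / (np.sqrt(3 * a) * np.pi**0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t**2) / (2 * a**2))

def cwt_image(signal, widths):
    """One row of |convolution| per scale -> a (scales x time) time-frequency image."""
    rows = [np.abs(np.convolve(signal, ricker(min(10 * w, len(signal)), w), mode="same"))
            for w in widths]
    return np.stack(rows)

t = np.linspace(0, 1, 512)
vib = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)  # toy vibration
img = cwt_image(vib, widths=np.arange(1, 31))
print(img.shape)  # (30, 512)
```

Images like `img` (one per signal window) are what would then be fed to the convolutional/pooling stages; in practice a Morlet wavelet and a dedicated wavelet library are common choices.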
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=continuous%20wavelet%20transform" title="continuous wavelet transform">continuous wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20net-work" title=" convolution neural net-work"> convolution neural net-work</a>, <a href="https://publications.waset.org/abstracts/search?q=gated%20recurrent%20unit" title=" gated recurrent unit"> gated recurrent unit</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20indicators" title=" health indicators"> health indicators</a>, <a href="https://publications.waset.org/abstracts/search?q=remaining%20useful%20life" title=" remaining useful life"> remaining useful life</a> </p> <a href="https://publications.waset.org/abstracts/108324/remaining-useful-life-estimation-of-bearings-based-on-nonlinear-dimensional-reduction-combined-with-timing-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">133</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Isaac%20K.%20E.%20Ampomah">Isaac K. E. 
Ampomah</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong-Bae%20Park"> Seong-Bae Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Sang-Jo%20Lee"> Sang-Jo Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These models include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for the classification. 
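The convolution-then-pooling step for extracting phrase representations can be sketched with plain NumPy: random "embeddings" stand in for real word vectors, and a trigram filter bank produces phrase features that are max-pooled over time (all sizes here are illustrative, not the paper's hyperparameters):

```python
import numpy as np

def conv1d_phrases(embeds, filters):
    """Slide each filter over n-gram windows -> (num_filters, L - n + 1) phrase features."""
    n = filters.shape[1] // embeds.shape[1]   # each filter spans n embedding vectors
    windows = np.stack([embeds[i:i + n].ravel() for i in range(len(embeds) - n + 1)])
    return filters @ windows.T

rng = np.random.default_rng(2)
sentence = rng.normal(size=(7, 4))        # 7 tokens, 4-dim embeddings (toy sizes)
filters = rng.normal(size=(6, 3 * 4))     # 6 filters over trigram windows
phrase = conv1d_phrases(sentence, filters)  # (6, 5): one column per phrase position
pooled = phrase.max(axis=1)                 # max-over-time pooling -> sentence vector
print(phrase.shape, pooled.shape)  # (6, 5) (6,)
```

In the model above, `phrase` is the sequence fed to the Bi-LSTM, while pooling-style reductions of such feature maps give the fixed-size vectors from which relation vectors can be computed.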
Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20models" title="deep neural models">deep neural models</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20inference" title=" natural language inference"> natural language inference</a>, <a href="https://publications.waset.org/abstracts/search?q=recognizing%20textual%20entailment%20%28RTE%29" title=" recognizing textual entailment (RTE)"> recognizing textual entailment (RTE)</a>, <a href="https://publications.waset.org/abstracts/search?q=sentence-to-sentence%20relation" title=" sentence-to-sentence relation"> sentence-to-sentence relation</a> </p> <a href="https://publications.waset.org/abstracts/60423/a-sentence-to-sentence-relation-network-for-recognizing-textual-entailment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60423.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anthony%20D.%20Rhodes">Anthony D. 
Rhodes</a>, <a href="https://publications.waset.org/abstracts/search?q=Manan%20Goel"> Manan Goel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent high fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20segmentation" title=" object segmentation"> object segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20segmentation" title=" interactive segmentation"> interactive segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20compression" title=" model compression"> model compression</a> </p> <a href="https://publications.waset.org/abstracts/122051/high-fidelity-interactive-video-segmentation-using-tensor-decomposition-boundary-loss-convolutional-tessellations-and-context-aware-skip-connections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Sex Difference of the Incidence of Sudden Cardiac Arrest/Death in Athletes: A Systematic Review and Meta-analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lingxia%20Li">Lingxia Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Fr%C3%A9d%C3%A9ric%20Schnell"> Frédéric Schnell</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuzhe%20Ding"> Shuzhe Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Sol%C3%A8ne%20Le%20Douairon%20Lahaye"> Solène Le Douairon Lahaye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: The risk of sudden cardiac arrest/death (SCA/D) in athletes is controversial. 
There is a lack of meta-analyses assessing sex differences in the risk of SCA/D in competitive athletes. Purpose: The aim of the present study was to evaluate sex differences in the incidence of SCA/D in competitive athletes using meta-analysis. Methods: The systematic review was registered in the PROSPERO database (registration ID: CRD42023432022) and was conducted according to the PRISMA guidelines. PubMed, Embase, Scopus, SPORTDiscus and the Cochrane Library were searched up to July 2023. To avoid systematic bias in data pooling, only studies with data for both sexes were included. Results: From the 18 included studies, 2028 cases of SCA/D were observed (1821 (89.79%) in males, 207 (10.21%) in females). Ages ranged from adolescents (<26 years) to the elderly (>45 years). The incidence in male athletes was 1.32/100,000 AY (95% CI: [0.90, 1.93]) and in female athletes was 0.26/100,000 AY (95% CI: [0.16, 0.43]); the incidence rate ratio (IRR) was 6.43 (95% CI: [4.22, 9.79]). Subgroup synthesis showed a higher incidence in males than in females in both the <25 years and ≤35 years age groups, with IRRs of 5.86 (95% CI: [4.69, 7.32]) and 5.79 (95% CI: [4.73, 7.09]), respectively. When considering the events, the IRR was 6.73 (95% CI: [3.06, 14.78]) among studies involving both SCA and SCD events and 7.16 (95% CI: [4.93, 10.40]) among studies including only cases of SCD. The available clinical evidence showed that cardiac events occurred most frequently in long-distance running races (26, 35.1%), marathons (16, 21.6%) and soccer (10, 13.5%). Coronary artery disease (14, 18.9%), hypertrophic cardiomyopathy (8, 10.8%), and arrhythmogenic right ventricular cardiomyopathy (7, 9.5%) were the most common causes of SCA/D in competitive athletes. Conclusion: This meta-analysis provides evidence of sex differences in the incidence of SCA/D in competitive athletes. The incidence of SCA/D in male athletes was 6 to 7 times higher than in females. 
Identifying the reasons for this difference may have implications for the targeted prevention of fatal events in athletes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=incidence" title="incidence">incidence</a>, <a href="https://publications.waset.org/abstracts/search?q=sudden%20cardiac%20arrest" title=" sudden cardiac arrest"> sudden cardiac arrest</a>, <a href="https://publications.waset.org/abstracts/search?q=sudden%20cardiac%20death" title=" sudden cardiac death"> sudden cardiac death</a>, <a href="https://publications.waset.org/abstracts/search?q=sex%20difference" title=" sex difference"> sex difference</a>, <a href="https://publications.waset.org/abstracts/search?q=athletes" title=" athletes"> athletes</a> </p> <a href="https://publications.waset.org/abstracts/173995/sex-difference-of-the-incidence-of-sudden-cardiac-arrestdeath-in-athletes-a-systematic-review-and-meta-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173995.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">64</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pooling&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pooling&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> 
Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
