CINXE.COM
Search results for: annotations
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: annotations</title> <meta name="description" content="Search results for: annotations"> <meta name="keywords" content="annotations"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="annotations" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="annotations"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 44</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: annotations</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">44</span> Fuzzy Semantic Annotation of Web Resources </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sahar%20Ma%C3%A2lej%20Dammak">Sahar Maâlej Dammak</a>, <a href="https://publications.waset.org/abstracts/search?q=Anis%20Jedidi"> Anis Jedidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafik%20Bouaziz"> Rafik Bouaziz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the great mass of pages managed through the world, and especially with the advent of the Web, their manual annotation is impossible. We focus, in this paper, on the semiautomatic annotation of the web pages. We propose an approach and a framework for semantic annotation of web pages entitled “Querying Web”. Our solution is an enhancement of the first result of annotation done by the “Semantic Radar” Plug-in on the web resources, by annotations using an enriched domain ontology. The concepts of the result of Semantic Radar may be connected to several terms of the ontology, but connections may be uncertain. We represent annotations as possibility distributions. We use the hierarchy defined in the ontology to compute degrees of possibilities. We want to achieve an automation of the fuzzy semantic annotation of web resources. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20semantic%20annotation" title="fuzzy semantic annotation">fuzzy semantic annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20web" title=" semantic web"> semantic web</a>, <a href="https://publications.waset.org/abstracts/search?q=domain%20ontologies" title=" domain ontologies"> domain ontologies</a>, <a href="https://publications.waset.org/abstracts/search?q=querying%20web" title=" querying web"> querying web</a> </p> <a href="https://publications.waset.org/abstracts/1854/fuzzy-semantic-annotation-of-web-resources" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">374</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">43</span> Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jean-Francois%20Rajotte">Jean-Francois Rajotte</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Sotir"> Martin Sotir</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Gouineau"> Frank Gouineau</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The installation and management of traffic monitoring devices can be costly from both a financial and resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20monitoring" title="traffic monitoring">traffic monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20annotation" title=" image annotation"> image annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicles" title=" vehicles"> vehicles</a>, <a href="https://publications.waset.org/abstracts/search?q=roads" title=" roads"> roads</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20systems" title=" real-time systems"> real-time systems</a> </p> <a href="https://publications.waset.org/abstracts/82867/deep-learning-based-object-detection-on-low-quality-images-a-case-study-of-real-time-traffic-monitoring" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42</span> Preliminary Knowledge Extraction from Beethoven’s Sonatas: from Musical Referential Patterns to Emotional Normative Ratings</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christina%20Volioti">Christina Volioti</a>, <a href="https://publications.waset.org/abstracts/search?q=Sotiris%20Manitsaris"> Sotiris Manitsaris</a>, <a href="https://publications.waset.org/abstracts/search?q=Eleni%20Katsouli"> Eleni Katsouli</a>, <a href="https://publications.waset.org/abstracts/search?q=Vasiliki%20Tsekouropoulou"> Vasiliki Tsekouropoulou</a>, <a href="https://publications.waset.org/abstracts/search?q=Leontios%20J.%20Hadjileontiadis"> Leontios J. Hadjileontiadis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The piano sonatas of Beethoven represent part of the Intangible Cultural Heritage. The aims of this research were to further explore this intangibility by placing emphasis on defining emotional normative ratings for the “Waldstein” (Op. 53) and “Tempest” (Op. 31) Sonatas of Beethoven. To this end, a musicological analysis was conducted on these particular sonatas and referential patterns in these works of Beethoven were defined. Appropriate interactive questionnaires were designed in order to create a statistical normative rating that describes the emotional status when an individual listens to these musical excerpts. Based on these ratings, it is possible for emotional annotations for these same referential patterns to be created and integrated into the music score. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotional%20annotations" title="emotional annotations">emotional annotations</a>, <a href="https://publications.waset.org/abstracts/search?q=intangible%20cultural%20heritage" title=" intangible cultural heritage"> intangible cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=musicological%20analysis" title=" musicological analysis"> musicological analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=normative%20ratings" title=" normative ratings"> normative ratings</a> </p> <a href="https://publications.waset.org/abstracts/89504/preliminary-knowledge-extraction-from-beethovens-sonatas-from-musical-referential-patterns-to-emotional-normative-ratings" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89504.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">41</span> Corpus Stylistics and Multidimensional Analysis for English for Specific Purposes Teaching and Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Svetlana%20Strinyuk">Svetlana Strinyuk</a>, <a href="https://publications.waset.org/abstracts/search?q=Viacheslav%20Lanin"> Viacheslav Lanin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Academic English has become lingua franca for international scientific community which stimulates universities to introduce English for Specific Purposes (EAP) courses into curriculum. Teaching L2 EAP students might be fulfilled with corpus technologies and digital stylistics. A special software developed to reach the manifold task of teaching, assessing and researching academic writing of L2 students on basis of digital stylistics and multidimensional analysis was created. A set of annotations (style markers) – grammar, lexical and syntactic features most significant of academic writing was built. Contrastive comparison of two corpora “model corpus”, subject domain limited papers published by competent writers in leading academic journals, and “students’ corpus”, subject domain limited papers written by last year students allows to receive data about the features of academic writing underused or overused by L2 EAP student. Both corpora are tagged with a special software created in GATE Developer. Style markers within the framework of research might be replaced depending on the relevance and validity of the result which is achieved from research corpora. Thus, selecting relevant (high frequency) style markers and excluding less relevant, i.e. less frequent annotations, high validity of the model is achieved. Software allows to compare the data received from processing model corpus to students’ corpus and get reports which can be used in teaching and assessment. The less deviation from the model corpus students demonstrates in their writing the higher is academic writing skill acquisition. The research showed that several style markers (hedging devices) were underused by L2 EAP students whereas lexical linking devices were used excessively. A special software implemented into teaching of EAP courses serves as a successful visual aid, makes assessment more valid; it is indicative of the degree of writing skill acquisition, and provides data for further research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corpus%20technologies%20in%20EAP%20teaching" title="corpus technologies in EAP teaching">corpus technologies in EAP teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=multidimensional%20analysis" title=" multidimensional analysis"> multidimensional analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=GATE%20Developer" title=" GATE Developer"> GATE Developer</a>, <a href="https://publications.waset.org/abstracts/search?q=corpus%20stylistics" title=" corpus stylistics"> corpus stylistics</a> </p> <a href="https://publications.waset.org/abstracts/91140/corpus-stylistics-and-multidimensional-analysis-for-english-for-specific-purposes-teaching-and-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91140.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">40</span> Digital Development of Cultural Heritage: Construction of Traditional Chinese Pattern Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shaojian%20Li">Shaojian Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The traditional Chinese patterns, as an integral part of Chinese culture, possess unique values in history, culture, and art. However, with the passage of time and societal changes, many of these traditional patterns are at risk of being lost, damaged, or forgotten. To undertake the digital preservation and protection of these traditional patterns, this paper will collect and organize images of traditional Chinese patterns. It will provide exhaustive and comprehensive semantic annotations, creating a resource library of traditional Chinese pattern images. This will support the digital preservation and application of traditional Chinese patterns. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digitization%20of%20cultural%20heritage" title="digitization of cultural heritage">digitization of cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=traditional%20Chinese%20patterns" title=" traditional Chinese patterns"> traditional Chinese patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20humanities" title=" digital humanities"> digital humanities</a>, <a href="https://publications.waset.org/abstracts/search?q=database%20construction" title=" database construction"> database construction</a> </p> <a href="https://publications.waset.org/abstracts/182148/digital-development-of-cultural-heritage-construction-of-traditional-chinese-pattern-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182148.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">59</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">39</span> A Clinician’s Perspective on Electroencephalography Annotation and Analysis for Driver Drowsiness Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ruxandra%20Aursulesei">Ruxandra Aursulesei</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20O%E2%80%99Callaghan"> David O’Callaghan</a>, <a href="https://publications.waset.org/abstracts/search?q=Cian%20Ryan"> Cian Ryan</a>, <a href="https://publications.waset.org/abstracts/search?q=Diarmaid%20O%E2%80%99Cualain"> Diarmaid O’Cualain</a>, <a href="https://publications.waset.org/abstracts/search?q=Viktor%20Varkarakis"> Viktor Varkarakis</a>, <a href="https://publications.waset.org/abstracts/search?q=Alina%20Sultana"> Alina Sultana</a>, <a href="https://publications.waset.org/abstracts/search?q=Joseph%20Lemley"> Joseph Lemley</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human errors caused by drowsiness are among the leading causes of road accidents. Neurobiological research gives information about the electrical signals emitted by neurons firing within the brain. Electrical signal frequencies can be determined by attaching bio-sensors to the head surface. By observing the electrical impulses and the rhythmic interaction of neurons with each other, we can predict the mental state of a person. In this paper, we aim to better understand intersubject and intrasubject variability in terms of electrophysiological patterns that occur at the onset of drowsiness and their evolution with the decreasing of vigilance. The purpose is to lay the foundations for an algorithm that detects the onset of drowsiness before the physical signs become apparent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electroencephalography" title="electroencephalography">electroencephalography</a>, <a href="https://publications.waset.org/abstracts/search?q=drowsiness" title=" drowsiness"> drowsiness</a>, <a href="https://publications.waset.org/abstracts/search?q=ADAS" title=" ADAS"> ADAS</a>, <a href="https://publications.waset.org/abstracts/search?q=annotations" title=" annotations"> annotations</a>, <a href="https://publications.waset.org/abstracts/search?q=clinician" title=" clinician"> clinician</a> </p> <a href="https://publications.waset.org/abstracts/156014/a-clinicians-perspective-on-electroencephalography-annotation-and-analysis-for-driver-drowsiness-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156014.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">38</span> Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chong%20Jiang">Chong Jiang</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Zhao"> Liang Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Hua%20Jian"> Hua Jian</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoguang%20Wang"> Xiaoguang Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The belief that there is a strong relationship between reading narrative and morality has generally become the basic assumption of scholars, philosophers, critics, and cultural critics. The virtuality constructed by literary novels inspires readers to regard the narrative as a thinking experiment, creating the distance between readers and events so that they can freely and morally experience the positions of different roles. Therefore, the virtual narrative combined with literary characteristics is always considered as a "moral laboratory." Well-established findings revealed that people show less lying and deceptive behaviors in the morning than in the afternoon, called the morning morality effect. As a limited self-regulation resource, morality will be constantly depleted with the change of time rhythm under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, literary novel reading gives people more virtual experience and emotional catharsis, just as a relaxing afternoon tea that helps people break away from fast-paced work, restore physical strength, and relieve stress in a short period of leisure. In this paper, inspired by the compensation control theory, we wonder whether reading literary novels in the digital environment could replenish a kind of spiritual energy for self-regulation to compensate for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text content generated by readers in digital reading to represent the readers' reading attention. We then recognized the semantics and calculated the readers' moral motivation expressed in the annotations and investigated the fine-grained dynamics of the moral motivation changing in each time slot within 24 hours of a day. Comprehensively comparing the division of different time intervals, sufficient experiments showed that the moral motivation reflected in the annotations in the afternoon is significantly higher than that in the morning. The results robustly verified the hypothesis that reading compensates for moral motivation, which we called the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the division of time intervals of different units impacts the identification of moral rhythms. Dividing the time intervals by four-hour time slot brings more insights of moral rhythms compared with that of three-hour and six-hour time slot. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20reading" title="digital reading">digital reading</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20annotation" title=" social annotation"> social annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=moral%20motivation" title=" moral motivation"> moral motivation</a>, <a href="https://publications.waset.org/abstracts/search?q=morning%20morality%20effect" title=" morning morality effect"> morning morality effect</a>, <a href="https://publications.waset.org/abstracts/search?q=control%20compensation" title=" control compensation"> control compensation</a> </p> <a href="https://publications.waset.org/abstracts/144570/reading-as-moral-afternoon-tea-an-empirical-study-on-the-compensation-effect-between-literary-novel-reading-and-readers-moral-motivation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144570.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">37</span> Philosophical Interpretations of Spells in the Imperial Chinese Buddhism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saiping%20An">Saiping An</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The spells in Chinese Buddhism are often regarded by current scholarship as syllables with mystical power, as a ritual and practice of oral chanting, or as texts engraved on cultural relics. This study hopes to point out that the spell as a kind of behavior and material also provokes the believers to interpret its soteriology with various Buddhist doctrines and philosophies. It will analyze Mahāvairocana Tantra which is the main classic of the tradition regarded by the academic circles as 'Esoteric Buddhism', two annotations of these scriptures composed in the Tang and Liao Dynasty respectively, as well as some works of monks and lay Buddhists in the late Ming and early Qing dynasties. It aims to illustrate that spells in Chinese Buddhism are not simply magical voices and the words engraved on the cultural relics; they have also enriched the doctrines and thoughts of Chinese Buddhism. Their nature and soteriological methods are far more abundant than current academic circles have revealed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spell" title="spell">spell</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinese%20Buddhism" title=" Chinese Buddhism"> Chinese Buddhism</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy" title=" philosophy"> philosophy</a>, <a href="https://publications.waset.org/abstracts/search?q=Buddhist%20doctrines" title=" Buddhist doctrines"> Buddhist doctrines</a> </p> <a href="https://publications.waset.org/abstracts/108168/philosophical-interpretations-of-spells-in-the-imperial-chinese-buddhism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">36</span> A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lucia%20Billeci">Lucia Billeci</a>, <a href="https://publications.waset.org/abstracts/search?q=Gennaro%20Tartarisco"> Gennaro Tartarisco</a>, <a href="https://publications.waset.org/abstracts/search?q=Maurizio%20Varanini"> Maurizio Varanini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the healthy condition of the fetus. Despite this potentiality till now the use of fECG in clinical practice has been quite limited due to the difficulties in its measuring. The recovery of fECG from the signals acquired non-invasively by using electrodes placed on the maternal abdomen is a challenging task because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the characteristics of pseudo-periodicity of fetal ECG. It consists of devising a quality index (fQI) for fECG and of finding the linear combinations of preprocessed abdominal signals, which maximize these fQI (quality index optimization - QIO). It aims at improving the performances of the most commonly adopted methods for fECG extraction, usually based on maternal ECG (mECG) estimating and canceling. The procedure for the fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest at the Physionet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from abdominal signals estimated by a principal component analysis (PCA) and applying the Independent component Analysis (ICA) on the residual signals. Both methods were developed and tuned using 69, 1 min long, abdominal measurements with fetal QRS annotation of the dataset A provided by PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and the ICA-based methods were compared in analyzing two databases of abdominal maternal ECG available on the Physionet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb) which contains the fetal QRS annotations thus allowing a quantitative performance comparison, the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain the fetal QRS annotations so that the comparison between the two methods can be only qualitative. In particular, the comparison on NIdb was performed defining an index of quality for the fetal RR series. On the annotated database ADdb the QIO method, provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989 overcoming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed defining an index of quality for the fetal RR series. The index of quality resulted higher for the QIO-based method compared to the ICA-based one in 35 records out 55 cases of the NIdb. The QIO-based method gave very high performances with both the databases. The results of this study foresees the application of the algorithm in a fully unsupervised way for the implementation in wearable devices for self-monitoring of fetal health. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fetal%20electrocardiography" title="fetal electrocardiography">fetal electrocardiography</a>, <a href="https://publications.waset.org/abstracts/search?q=fetal%20QRS%20detection" title=" fetal QRS detection"> fetal QRS detection</a>, <a href="https://publications.waset.org/abstracts/search?q=independent%20component%20analysis%20%28ICA%29" title=" independent component analysis (ICA)"> independent component analysis (ICA)</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=wearable" title=" wearable"> wearable</a> </p> <a href="https://publications.waset.org/abstracts/51208/a-quality-index-optimization-method-for-non-invasive-fetal-ecg-extraction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51208.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">35</span> OPEN-EmoRec-II-A Multimodal Corpus of Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefanie%20Rukavina">Stefanie Rukavina</a>, <a href="https://publications.waset.org/abstracts/search?q=Sascha%20Gruss"> Sascha Gruss</a>, <a href="https://publications.waset.org/abstracts/search?q=Steffen%20Walter"> Steffen Walter</a>, <a href="https://publications.waset.org/abstracts/search?q=Holger%20Hoffmann"> Holger Hoffmann</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20C.%20Traue"> Harald C. Traue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> OPEN-EmoRecII is an open multimodal corpus with experimentally induced emotions. In the first half of the experiment, emotions were induced with standardized picture material and in the second half during a human-computer interaction (HCI), realized with a wizard-of-oz design. The induced emotions are based on the dimensional theory of emotions (valence, arousal and dominance). These emotional sequences - recorded with multimodal data (mimic reactions, speech, audio and physiological reactions) during a naturalistic-like HCI-environment one can improve classification methods on a multimodal level. This database is the result of an HCI-experiment, for which 30 subjects in total agreed to a publication of their data including the video material for research purposes. The now available open corpus contains sensory signal of: video, audio, physiology (SCL, respiration, BVP, EMG Corrugator supercilii, EMG Zygomaticus Major) and mimic annotations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=open%20multimodal%20emotion%20corpus" title="open multimodal emotion corpus">open multimodal emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=annotated%20labels" title=" annotated labels"> annotated labels</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20interaction" title=" intelligent interaction"> intelligent interaction</a> </p> <a href="https://publications.waset.org/abstracts/29365/open-emorec-ii-a-multimodal-corpus-of-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> Addressing the Exorbitant Cost of Labeling Medical Images with Active Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saba%20Rahimi">Saba Rahimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ozan%20Oktay"> Ozan Oktay</a>, <a href="https://publications.waset.org/abstracts/search?q=Javier%20Alvarez-Valle"> Javier Alvarez-Valle</a>, <a href="https://publications.waset.org/abstracts/search?q=Sujeeth%20Bharadwaj"> Sujeeth Bharadwaj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Successful application of deep learning in medical image analysis necessitates unprecedented amounts of labeled training data. Unlike conventional 2D applications, radiological images can be three-dimensional (e.g., CT, MRI), consisting of many instances within each image. The problem is exacerbated when expert annotations are required for effective pixel-wise labeling, which incurs exorbitant labeling effort and cost. Active learning is an established research domain that aims to reduce labeling workload by prioritizing a subset of informative unlabeled examples to annotate. Our contribution is a cost-effective approach for U-Net 3D models that uses Monte Carlo sampling to analyze pixel-wise uncertainty. Experiments on the AAPM 2017 lung CT segmentation challenge dataset show that our proposed framework can achieve promising segmentation results by using only 42% of the training data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title="image segmentation">image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20learning" title=" active learning"> active learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20U-Net" title=" 3D U-Net"> 3D U-Net</a> </p> <a href="https://publications.waset.org/abstracts/137198/addressing-the-exorbitant-cost-of-labeling-medical-images-with-active-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=V.%20V.%20Fomin">V. V. Fomin</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20P.%20Mikhailovich"> A. P. Mikhailovich</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20M.%20Agapitov"> E. M. Agapitov</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20E.%20Rogachev"> V. E. Rogachev</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20A.%20Kostousova"> E. A. Kostousova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20S.%20Perekhodova"> E. S. Perekhodova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We proposed a method for processing, analyzing, and presenting ground photographs, which has the following advantages: 1) researcher has to form holistic representation of the study area in form of a set of interlapping ground-level landscape photographs; 2) it is necessary to define or obtain characteristics of the landscape, objects, and phenomena present on the photographs; 3) it is necessary to create new or supplement existing textual descriptions and annotations for the ground-level landscape photographs; 4) single or multiple ground-level landscape photographs can be used to develop specialized geoinformation layers, schematic maps or thematic maps; 5) it is necessary to determine quantitative data that describes both images as a whole, and displayed objects and phenomena, using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding with parts of the landscape visible in the photos. Calculation of visibility areas is performed in a geoinformation system within a sector using a digital model of a study area relief and visibility analysis functions. Superposition of the visibility sectors corresponding with various camera viewpoints allows matching landscape photos with each other to create a complete and wholesome representation of the space in question. It is suggested to user-defined data or phenomenons on the images with the following superposition over the visibility sector in the form of map symbols. The technology of geoinformation layers’ spatial superposition over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of visibility sectors for a large number of different points of photography are topologically superimposed, a layer of visibility of sections of the entire study area is formed, which is displayed in the photographs. Also, as a result of this overlapping of sectors, areas that did not appear in the photo will be assessed as gaps. According to the results of this procedure, it becomes possible to obtain information about the photos that display a specific area and from which points of photography it is visible. This information may be obtained either as a query on the map or as a query for the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods of studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=woody" title="woody">woody</a>, <a href="https://publications.waset.org/abstracts/search?q=vegetation" title=" vegetation"> vegetation</a>, <a href="https://publications.waset.org/abstracts/search?q=repeated" title=" repeated"> repeated</a>, <a href="https://publications.waset.org/abstracts/search?q=photographs" title=" photographs"> photographs</a> </p> <a href="https://publications.waset.org/abstracts/178907/spatio-temporal-dynamic-of-woody-vegetation-assessment-using-oblique-landscape-photographs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178907.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Meta Mask Correction for Nuclei Segmentation in Histopathological Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiangbo%20Shi">Jiangbo Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Zeyu%20Gao"> Zeyu Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Li"> Chen Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks which is hard to obtain. Training with weakly labeled data is a popular solution for reducing the workload of annotation. In this paper, we propose a novel meta-learning-based nuclei segmentation method which follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully conventional meta-model that can correct noisy masks by using a small amount of clean meta-data. Then the corrected masks are used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclear segmentation datasets show that our method achieves the state-of-the-art result. In particular, in some noise scenarios, it even exceeds the performance of training on supervised data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=histopathological%20image" title=" histopathological image"> histopathological image</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-learning" title=" meta-learning"> meta-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=nuclei%20segmentation" title=" nuclei segmentation"> nuclei segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=weak%20annotations" title=" weak annotations"> weak annotations</a> </p> <a href="https://publications.waset.org/abstracts/136409/meta-mask-correction-for-nuclei-segmentation-in-histopathological-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136409.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> VideoAssist: A Labelling Assistant to Increase Efficiency in Annotating Video-Based Fire Dataset Using a Foundation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyur%20Joshi">Keyur Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Philip%20Dietrich"> Philip Dietrich</a>, <a href="https://publications.waset.org/abstracts/search?q=Tjark%20Windisch"> Tjark Windisch</a>, <a href="https://publications.waset.org/abstracts/search?q=Markus%20K%C3%B6nig"> Markus König</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of surveillance-based fire detection, the volume of incoming data is increasing rapidly. However, the labeling of a large industrial dataset is costly due to the high annotation costs associated with current state-of-the-art methods, which often require bounding boxes or segmentation masks for model training. This paper introduces VideoAssist, a video annotation solution that utilizes a video-based foundation model to annotate entire videos with minimal effort, requiring the labeling of bounding boxes for only a few keyframes. To the best of our knowledge, VideoAssist is the first method to significantly reduce the effort required for labeling fire detection videos. The approach offers bounding box and segmentation annotations for the video dataset with minimal manual effort. Results demonstrate that the performance of labels annotated by VideoAssist is comparable to those annotated by humans, indicating the potential applicability of this approach in fire detection scenarios. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fire%20detection" title="fire detection">fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=label%20annotation" title=" label annotation"> label annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=foundation%20models" title=" foundation models"> foundation models</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/194622/videoassist-a-labelling-assistant-to-increase-efficiency-in-annotating-video-based-fire-dataset-using-a-foundation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> The Advancements of Transformer Models in Part-of-Speech Tagging System for Low-Resource Tigrinya Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamm%20Kidane">Shamm Kidane</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Abdella"> Ibrahim Abdella</a>, <a href="https://publications.waset.org/abstracts/search?q=Fitsum%20Gaim"> Fitsum Gaim</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Mulugeta"> Simon Mulugeta</a>, <a href="https://publications.waset.org/abstracts/search?q=Sirak%20Asmerom"> Sirak Asmerom</a>, <a href="https://publications.waset.org/abstracts/search?q=Natnael%20Ambasager"> Natnael Ambasager</a>, <a href="https://publications.waset.org/abstracts/search?q=Yoel%20Ghebrihiwot"> Yoel Ghebrihiwot</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The call for natural language processing (NLP) systems for low-resource languages has become more apparent than ever in the past few years, with the arduous challenges still present in preparing such systems. This paper presents an improved dataset version of the Nagaoka Tigrinya Corpus for Parts-of-Speech (POS) classification system in the Tigrinya language. The size of the initial Nagaoka dataset was incremented, totaling the new tagged corpus to 118K tokens, which comprised the 12 basic POS annotations used previously. The additional content was also annotated manually in a stringent manner, followed similar rules to the former dataset and was formatted in CONLL format. The system made use of the novel approach in NLP tasks and use of the monolingually pre-trained TiELECTRA, TiBERT and TiRoBERTa transformer models. The highest achieved score is an impressive weighted F1-score of 94.2%, which surpassed the previous systems by a significant measure. The system will prove useful in the progress of NLP-related tasks for Tigrinya and similarly related low-resource languages with room for cross-referencing higher-resource languages. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tigrinya%20POS%20corpus" title="Tigrinya POS corpus">Tigrinya POS corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=TiBERT" title=" TiBERT"> TiBERT</a>, <a href="https://publications.waset.org/abstracts/search?q=TiRoBERTa" title=" TiRoBERTa"> TiRoBERTa</a>, <a href="https://publications.waset.org/abstracts/search?q=conditional%20random%20fields" title=" conditional random fields"> conditional random fields</a> </p> <a href="https://publications.waset.org/abstracts/177822/the-advancements-of-transformer-models-in-part-of-speech-tagging-system-for-low-resource-tigrinya-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Comparison of Rumen Microbial Analysis Pipelines Based on 16s rRNA Gene Sequencing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoxing%20Ye">Xiaoxing Ye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To investigate complex rumen microbial communities, 16S ribosomal RNA (rRNA) sequencing is widely used. Here, we evaluated the impact of bioinformatics pipelines on the observation of OTUs and taxonomic classification of 750 cattle rumen microbial samples by comparing three commonly used pipelines (LotuS, UPARSE, and QIIME) with Usearch. In LotuS-based analyses, 189 archaeal and 3894 bacterial OTUs were observed. The observed OTUs for the Usearch analysis were significantly larger than the LotuS results. We discovered 1495 OTUs for archaea and 92665 OTUs for bacteria using Usearch analysis. In addition, taxonomic assignments were made for the rumen microbial samples. All pipelines had consistent taxonomic annotations from the phylum to the genus level. A difference in relative abundance was calculated for all microbial levels, including Bacteroidetes (QIIME: 72.2%, Usearch: 74.09%), Firmicutes (QIIME: 18.3%, Usearch: 20.20%) for the bacterial phylum, Methanobacteriales (QIIME: 64.2%, Usearch: 45.7%) for the archaeal class, Methanobacteriaceae (QIIME: 35%, Usearch: 45.7%) and Methanomassiliicoccaceae (QIIME: 35%, Usearch: 31.13%) for archaeal family. However, the most prevalent archaeal class varied between these two annotation pipelines. The Thermoplasmata was the top class according to the QIIME annotation, whereas Methanobacteria was the top class according to Usearch. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cattle%20rumen" title="cattle rumen">cattle rumen</a>, <a href="https://publications.waset.org/abstracts/search?q=rumen%20microbial" title=" rumen microbial"> rumen microbial</a>, <a href="https://publications.waset.org/abstracts/search?q=16S%20rRNA%20gene%20sequencing" title=" 16S rRNA gene sequencing"> 16S rRNA gene sequencing</a>, <a href="https://publications.waset.org/abstracts/search?q=bioinformatics%20pipeline" title=" bioinformatics pipeline"> bioinformatics pipeline</a> </p> <a href="https://publications.waset.org/abstracts/171247/comparison-of-rumen-microbial-analysis-pipelines-based-on-16s-rrna-gene-sequencing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Experiments on Weakly-Supervised Learning on Imperfect Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yan%20Cheng">Yan Cheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Yijun%20Shao"> Yijun Shao</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Rudolph"> James Rudolph</a>, <a href="https://publications.waset.org/abstracts/search?q=Charlene%20R.%20Weir"> Charlene R. Weir</a>, <a href="https://publications.waset.org/abstracts/search?q=Beth%20Sahlmann"> Beth Sahlmann</a>, <a href="https://publications.waset.org/abstracts/search?q=Qing%20Zeng-Treitler"> Qing Zeng-Treitler</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is if the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on the data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=weakly-supervised%20learning" title="weakly-supervised learning">weakly-supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction" title=" prediction"> prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=delirium" title=" delirium"> delirium</a>, <a href="https://publications.waset.org/abstracts/search?q=simulation" title=" simulation"> simulation</a> </p> <a href="https://publications.waset.org/abstracts/99362/experiments-on-weakly-supervised-learning-on-imperfect-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anthony%20Proschka">Anthony Proschka</a>, <a href="https://publications.waset.org/abstracts/search?q=Deepak%20Mishra"> Deepak Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Merlyn%20Ramanan"> Merlyn Ramanan</a>, <a href="https://publications.waset.org/abstracts/search?q=Zurab%20Baratashvili"> Zurab Baratashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, extracting structured fields such as sender name, amount, and VAT rate from them automatically is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any other previously reported method. The approaches are described for 1) detecting through visual features which template a given document belongs to, 2) automatically generating extraction rules for a given new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as little as one training sample and only requires the ground truth field values instead of detailed annotations such as bounding boxes that are hard to obtain. The system is deployed and used inside a commercial accounting software. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval" title=" information retrieval"> information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=business" title=" business"> business</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=layout" title=" layout"> layout</a>, <a href="https://publications.waset.org/abstracts/search?q=business%20data%20processing" title=" business data processing"> business data processing</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20handling" title=" document handling"> document handling</a>, <a href="https://publications.waset.org/abstracts/search?q=end-user%20trained%20information%20extraction" title=" end-user trained information extraction"> end-user trained information extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20archiving" title=" document archiving"> document archiving</a>, <a href="https://publications.waset.org/abstracts/search?q=scanned%20business%20documents" title=" scanned business documents"> scanned business documents</a>, <a href="https://publications.waset.org/abstracts/search?q=automated%20document%20processing" title=" automated document processing"> automated document processing</a>, <a href="https://publications.waset.org/abstracts/search?q=F1-measure" title=" F1-measure"> F1-measure</a>, <a href="https://publications.waset.org/abstracts/search?q=commercial%20accounting%20software" title=" commercial accounting software"> commercial accounting software</a> </p> <a href="https://publications.waset.org/abstracts/128370/visual-template-detection-and-compositional-automatic-regular-expression-generation-for-business-invoice-extraction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128370.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marrone%20Silverio%20Melo%20Dantas%20Pedro%20Henrique%20Dreyer">Marrone Silverio Melo Dantas Pedro Henrique Dreyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabriel%20Fonseca%20Reis%20de%20Souza"> Gabriel Fonseca Reis de Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Bezerra"> Daniel Bezerra</a>, <a href="https://publications.waset.org/abstracts/search?q=Ricardo%20Souza"> Ricardo Souza</a>, <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Lins"> Silvia Lins</a>, <a href="https://publications.waset.org/abstracts/search?q=Judith%20Kelner"> Judith Kelner</a>, <a href="https://publications.waset.org/abstracts/search?q=Djamel%20Fawzi%20Hadj%20Sadok"> Djamel Fawzi Hadj Sadok</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as the efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset an F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861, an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures. We adopted metrics accuracy and F1-Score, for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both projection and tracking datasets. It reached an accuracy and F1-score of 0.997 and 0.993, respectively. Similarly, for the tracking dataset, it achieved an accuracy of 0.991 and an F1-Score of 0.981. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RJ45" title="RJ45">RJ45</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20annotation" title=" automatic annotation"> automatic annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20projection" title=" 3D projection"> 3D projection</a> </p> <a href="https://publications.waset.org/abstracts/130540/video-object-segmentation-for-automatic-image-annotation-of-ethernet-connectors-with-environment-mapping-and-3d-projection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Automated Digital Mammogram Segmentation Using Dispersed Region Growing and Pectoral Muscle Sliding Window Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ayush%20Shrivastava">Ayush Shrivastava</a>, <a href="https://publications.waset.org/abstracts/search?q=Arpit%20Chaudhary"> Arpit Chaudhary</a>, <a href="https://publications.waset.org/abstracts/search?q=Devang%20Kulshreshtha"> Devang Kulshreshtha</a>, <a href="https://publications.waset.org/abstracts/search?q=Vibhav%20Prakash%20Singh"> Vibhav Prakash Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajeev%20Srivastava"> Rajeev Srivastava</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Early diagnosis of breast cancer can improve the survival rate by detecting cancer at an early stage. Breast region segmentation is an essential step in the analysis of digital mammograms. Accurate image segmentation leads to better detection of cancer. It aims at separating out Region of Interest (ROI) from rest of the image. The procedure begins with removal of labels, annotations and tags from the mammographic image using morphological opening method. Pectoral Muscle Sliding Window Algorithm (PMSWA) is used for removal of pectoral muscle from mammograms which is necessary as the intensity values of pectoral muscles are similar to that of ROI which makes it difficult to separate out. After removing the pectoral muscle, Dispersed Region Growing Algorithm (DRGA) is used for segmentation of mammogram which disperses seeds in different regions instead of a single bright region. To demonstrate the validity of our segmentation method, 322 mammographic images from Mammographic Image Analysis Society (MIAS) database are used. The dataset contains medio-lateral oblique (MLO) view of mammograms. Experimental results on MIAS dataset show the effectiveness of our proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CAD" title="CAD">CAD</a>, <a href="https://publications.waset.org/abstracts/search?q=dispersed%20region%20growing%20algorithm%20%28DRGA%29" title=" dispersed region growing algorithm (DRGA)"> dispersed region growing algorithm (DRGA)</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=mammography" title=" mammography"> mammography</a>, <a href="https://publications.waset.org/abstracts/search?q=pectoral%20muscle%20sliding%20window%20algorithm%20%28PMSWA%29" title=" pectoral muscle sliding window algorithm (PMSWA)"> pectoral muscle sliding window algorithm (PMSWA)</a> </p> <a href="https://publications.waset.org/abstracts/69020/automated-digital-mammogram-segmentation-using-dispersed-region-growing-and-pectoral-muscle-sliding-window-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/69020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">312</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> The Transcriptome of Carnation (Dianthus Caryophyllus) of Elicited Cells with Fusarium Oxysporum f.sp. Dianthi </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20Jose%20Filgueira">Juan Jose Filgueira</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniela%20%20Londono-Serna"> Daniela Londono-Serna</a>, <a href="https://publications.waset.org/abstracts/search?q=Liliana%20Maria%20%20Hoyos"> Liliana Maria Hoyos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Carnation (Dianthus caryophyllus) is one of the most important products of exportation in the floriculture industry worldwide. Fusariosis is the disease that causes the highest losses on farms, in particular the one produced by Fusarium oxysporum f.sp. dianthi, called vascular wilt. Gene identification and metabolic routes of the genes that participate in the building of the plant response to Fusarium are some of the current targets in the carnation breeding industry. The techniques for the identifying of resistant genes in the plants, is the analysis of the transcriptome obtained during the host-pathogen interaction. In this work, we report the cell transcriptome of different varieties of carnation that present differential response from Fusarium oxysporum f.sp. dianthi attack. The cells of the different hybrids produced in the outbreeding program were cultured in vitro and elicited with the parasite in a dual culture. The isolation and purification of mRNA was achieved by using affinity chromatography Oligo dT columns and the transcriptomes were obtained by using Illumina NGS techniques. A total of 85,669 unigenes were detected in all the transcriptomes analyzed and 31,000 annotations were found in databases, which correspond to 36.2%. The library construction of genic expression techniques used, allowed to recognize the variation in the expression of genes such as Germin-like protein, Glycosyl hydrolase family and Cinnamate 4-hydroxylase. These have been reported in this study for the first time as part of the response mechanism to the presence of Fusarium oxysporum. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carnation" title="Carnation">Carnation</a>, <a href="https://publications.waset.org/abstracts/search?q=Fusarium" title=" Fusarium"> Fusarium</a>, <a href="https://publications.waset.org/abstracts/search?q=vascular%20wilt" title=" vascular wilt"> vascular wilt</a>, <a href="https://publications.waset.org/abstracts/search?q=transcriptome" title=" transcriptome"> transcriptome</a> </p> <a href="https://publications.waset.org/abstracts/134862/the-transcriptome-of-carnation-dianthus-caryophyllus-of-elicited-cells-with-fusarium-oxysporum-fsp-dianthi" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134862.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Renukswamy%20Chikkamath">Renukswamy Chikkamath</a>, <a href="https://publications.waset.org/abstracts/search?q=Vishvapalsinhji%20Ramsinh%20Parmar"> Vishvapalsinhji Ramsinh Parmar</a>, <a href="https://publications.waset.org/abstracts/search?q=Christoph%20Hewel"> Christoph Hewel</a>, <a href="https://publications.waset.org/abstracts/search?q=Markus%20Endres"> Markus Endres</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Given a patent document, identifying distinct semantic annotations is an interesting research aspect. Text annotation helps the patent practitioners such as examiners and patent attorneys to quickly identify the key arguments of any invention, successively providing a timely marking of a patent text. In the process of manual patent analysis, to attain better readability, recognising the semantic information by marking paragraphs is in practice. This semantic annotation process is laborious and time-consuming. To alleviate such a problem, we proposed a dataset to train machine learning algorithms to automate the highlighting process. The contributions of this work are: i) we developed a multi-class dataset of size 150k samples by traversing USPTO patents over a decade, ii) articulated statistics and distributions of data using imperative exploratory data analysis, iii) baseline Machine Learning models are developed to utilize the dataset to address patent paragraph highlighting task, and iv) future path to extend this work using Deep Learning and domain-specific pre-trained language models to develop a tool to highlight is provided. This work assists patent practitioners in highlighting semantic information automatically and aids in creating a sustainable and efficient patent analysis using the aptitude of machine learning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=patents" title=" patents"> patents</a>, <a href="https://publications.waset.org/abstracts/search?q=patent%20sentiment%20analysis" title=" patent sentiment analysis"> patent sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=patent%20information%20retrieval" title=" patent information retrieval"> patent information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/144394/pasa-a-dataset-for-patent-sentiment-analysis-to-highlight-patent-paragraphs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144394.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">91</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Influences on Female Gender Identity and Role in Pre-School, Saudi Arabian: Analyzing Children's Perspectives through Narratives and Teachers' Pedagogies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mona%20Alzahrani">Mona Alzahrani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Microworld theories can help to define the many influences on female development. In this research, theories together with narratives have been used to discover the reality of children’s gender perceptions in Saudi Arabia. Today, Saudi Arabia is considered a ‘closed and conserved’ society due to tribal, cultural and religious factors. This study focuses on how young girls in Saudi Arabia learn about what is expected of them as females. Cultural beliefs and experiences contribute to children’s notions of identity. Moreover, significant others such as more experienced peers, teachers, parents, and other members of a society can influence a child’s development of knowledge through interactions within their social world. There are dominant influences from the Saudi State. These influences have very strong devices and perceptions of what or how a female should act and be. However, children may have other viewpoints, as it also needs to be considered that the Internet and other media sources could have an influence. Consequently, difficulties could exist for these young children to feel an authentic sense of belonging. The study gathered data using a multi-method approach that elicited the perspectives of the children using ‘multiple modes of expression’ such as observations, story-telling, picture prompt cards, group interviews, drawings and annotations. For this study, prompts and a book was devised, specifically, for use in a Saudi setting. It was found that Saudi young girls in preschool were heteronomous, mainly influenced by culture and society, in their perceptions of female gender and role. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saudi%20Arabia" title="Saudi Arabia">Saudi Arabia</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-school" title=" pre-school"> pre-school</a>, <a href="https://publications.waset.org/abstracts/search?q=female" title=" female"> female</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers" title=" teachers"> teachers</a>, <a href="https://publications.waset.org/abstracts/search?q=gender" title=" gender"> gender</a>, <a href="https://publications.waset.org/abstracts/search?q=identity" title=" identity"> identity</a>, <a href="https://publications.waset.org/abstracts/search?q=role" title=" role"> role</a> </p> <a href="https://publications.waset.org/abstracts/94408/influences-on-female-gender-identity-and-role-in-pre-school-saudi-arabian-analyzing-childrens-perspectives-through-narratives-and-teachers-pedagogies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> A Framework for Secure Information Flow Analysis in Web Applications </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ralph%20Adaimy">Ralph Adaimy</a>, <a href="https://publications.waset.org/abstracts/search?q=Wassim%20El-Hajj"> Wassim El-Hajj</a>, <a href="https://publications.waset.org/abstracts/search?q=Ghassen%20Ben%20Brahim"> Ghassen Ben Brahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hazem%20Hajj"> Hazem Hajj</a>, <a href="https://publications.waset.org/abstracts/search?q=Haidar%20Safa"> Haidar Safa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Huge amounts of data and personal information are being sent to and retrieved from web applications on daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have broad negative impact on the involved company’s financial status, while enforcing them is very hard even for the developers with good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in a sense that the developer only needs to annotate database attributes by a security class. The web application code is then converted into an intermediary representation, called Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation in the data’s confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover and to highlight the simplicity of the suggested approaches vs. existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided a very positive feedback on the simplicity of the annotation task. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web%20applications%20security" title="web applications security">web applications security</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20information%20flow" title=" secure information flow"> secure information flow</a>, <a href="https://publications.waset.org/abstracts/search?q=program%20dependence%20graph" title=" program dependence graph"> program dependence graph</a>, <a href="https://publications.waset.org/abstracts/search?q=database%20annotation" title=" database annotation"> database annotation</a> </p> <a href="https://publications.waset.org/abstracts/19919/a-framework-for-secure-information-flow-analysis-in-web-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19919.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">471</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pranav%20Ajmera">Pranav Ajmera</a>, <a href="https://publications.waset.org/abstracts/search?q=Amit%20Kharat">Amit Kharat</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanveer%20Gupte"> Tanveer Gupte</a>, <a href="https://publications.waset.org/abstracts/search?q=Richa%20Pant"> Richa Pant</a>, <a href="https://publications.waset.org/abstracts/search?q=Viraj%20Kulkarni"> Viraj Kulkarni</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Duddalwar"> Vinay Duddalwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Purnachandra%20Lamghare"> Purnachandra Lamghare</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology flow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and a F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR can therefore have significant potential to be used in clinical workflows. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cardiomegaly" title="cardiomegaly">cardiomegaly</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=chest%20radiograph" title=" chest radiograph"> chest radiograph</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=cardiothoracic%20ratio" title=" cardiothoracic ratio"> cardiothoracic ratio</a> </p> <a href="https://publications.waset.org/abstracts/150795/a-deep-learning-approach-to-calculate-cardiothoracic-ratio-from-chest-radiographs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chi-Ching%20Lee">Chi-Ching Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Po-Jung%20Huang"> Po-Jung Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuo-Yang%20Huang"> Kuo-Yang Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Petrus%20Tang"> Petrus Tang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Recent advances in high-throughput research technologies such as new-generation sequencing and multi-dimensional liquid chromatography makes it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analysis these “BIG” data without the support from a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge on bio-computing to study the functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivering system: C-eXpress takes a simple text file that contain the standard NCBI gene or protein ID and expression levels (rpkm or fold) as input file to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyper-linked to a dynamic html table that allows the users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contain pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; functional classification based on GO molecular function, kinase, peptidase and transporter. Multiple ways of sorting of column and rows is also provided for comparative analysis and visualization of multiple samples. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cancer" title="cancer">cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=database" title=" database"> database</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20annotation" title=" functional annotation"> functional annotation</a> </p> <a href="https://publications.waset.org/abstracts/16079/c-express-a-web-based-analysis-platform-for-comparative-functional-genomics-and-proteomics-in-human-cancer-cell-line-nci-60-as-an-example" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16079.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">618</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankit%20Kamboj">Ankit Kamboj</a>, <a href="https://publications.waset.org/abstracts/search?q=Shikha%20Talwar"> Shikha Talwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Nilesh%20Powar"> Nilesh Powar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To successfully deliver our products in the market, the employees need to be in a safe environment, especially in an industrial and manufacturing environment. The consequences of delinquency in wearing safety glasses while working in industrial plants could be high risk to employees, hence the need to develop a real-time automatic detection system which detects the persons (violators) not wearing safety glasses. In this study a convolutional neural network (CNN) algorithm called faster region based CNN (Faster RCNN) with rotated bounding box has been used for detecting safety glasses on persons; the algorithm has an advantage of detecting safety glasses with different orientation angles on the persons. The proposed method of rotational bounding boxes with a convolutional neural network first detects a person from the images, and then the method detects whether the person is wearing safety glasses or not. The video data is captured at the entrance of restricted zones of the industrial environment (manufacturing plant), which is further converted into images at 2 frames per second. In the first step, the CNN with pre-trained weights on COCO dataset is used for person detection where the detections are cropped as images. Then the safety goggles are labelled on the cropped images using the image labelling tool called roLabelImg, which is used to annotate the ground truth values of rotated objects more accurately, and the annotations obtained are further modified to depict four coordinates of the rectangular bounding box. Next, the faster RCNN with rotated bounding box is used to detect safety goggles, which is then compared with traditional bounding box faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for detection of rotatory objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=faster%20RCNN" title=" faster RCNN"> faster RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=roLabelImg%20rotated%20bounding%20box" title=" roLabelImg rotated bounding box"> roLabelImg rotated bounding box</a>, <a href="https://publications.waset.org/abstracts/search?q=safety%20goggle%20detection" title=" safety goggle detection"> safety goggle detection</a> </p> <a href="https://publications.waset.org/abstracts/125856/detection-of-safety-goggles-on-humans-in-industrial-environment-using-faster-region-based-on-convolutional-neural-network-with-rotated-bounding-box" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Mokoatle">Mpho Mokoatle</a>, <a href="https://publications.waset.org/abstracts/search?q=Darlington%20Mapiye"> Darlington Mapiye</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Mashiyane"> James Mashiyane</a>, <a href="https://publications.waset.org/abstracts/search?q=Stephanie%20Muller"> Stephanie Muller</a>, <a href="https://publications.waset.org/abstracts/search?q=Gciniwe%20Dlamini"> Gciniwe Dlamini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Great advances in high-throughput sequencing technologies have resulted in availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotations, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on $k$-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isoloates. Cluster analysis showed that k-mers maybe used to discriminate phenotypes and the discrimination becomes more concise as the size of k-mers increase. The best performing classification model had a k-mer size of 10 (longest k-mer) an accuracy, recall, precision, specificity, and Matthews Correlation coeffient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4 respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biological explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data, and identify phenotype relationships which are important especially in explaining complex biological mechanisms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWD-LSTM" title="AWD-LSTM">AWD-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=bootstrapping" title=" bootstrapping"> bootstrapping</a>, <a href="https://publications.waset.org/abstracts/search?q=k-mers" title=" k-mers"> k-mers</a>, <a href="https://publications.waset.org/abstracts/search?q=next%20generation%20sequencing" title=" next generation sequencing"> next generation sequencing</a> </p> <a href="https://publications.waset.org/abstracts/122679/phenotype-prediction-of-dna-sequence-data-a-machine-and-statistical-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122679.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Darlington%20Mapiye">Darlington Mapiye</a>, <a href="https://publications.waset.org/abstracts/search?q=Mpho%20Mokoatle"> Mpho Mokoatle</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Mashiyane"> James Mashiyane</a>, <a href="https://publications.waset.org/abstracts/search?q=Stephanie%20Muller"> Stephanie Muller</a>, <a href="https://publications.waset.org/abstracts/search?q=Gciniwe%20Dlamini"> Gciniwe Dlamini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Great advances in high-throughput sequencing technologies have resulted in availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotations, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isoloates. Cluster analysis showed that k-mers maybe used to discriminate phenotypes and the discrimination becomes more concise as the size of k-mers increase. The best performing classification model had a k-mer size of 10 (longest k-mer) an accuracy, recall, precision, specificity, and Matthews Correlation coeffient of 72.0 %, 80.5 %, 80.5 %, 63.6 %, and 0.4 respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biological explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. However, the analysis provides a new way to elucidate genetic information from genomic data, and identify phenotype relationships which are important especially in explaining complex biological mechanisms <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWD-LSTM" title="AWD-LSTM">AWD-LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=bootstrapping" title=" bootstrapping"> bootstrapping</a>, <a href="https://publications.waset.org/abstracts/search?q=k-mers" title=" k-mers"> k-mers</a>, <a href="https://publications.waset.org/abstracts/search?q=next%20generation%20sequencing" title=" next generation sequencing"> next generation sequencing</a> </p> <a href="https://publications.waset.org/abstracts/122670/phenotype-prediction-of-dna-sequence-data-a-machine-and-statistical-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122670.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rofida%20Gamal">Rofida Gamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mostafa%20Mohammed"> Mostafa Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariam%20Adel"> Mariam Adel</a>, <a href="https://publications.waset.org/abstracts/search?q=Marwa%20Gamal"> Marwa Gamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Marwa%20kamal"> Marwa kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayat%20Saber"> Ayat Saber</a>, <a href="https://publications.waset.org/abstracts/search?q=Maha%20Mamdouh"> Maha Mamdouh</a>, <a href="https://publications.waset.org/abstracts/search?q=Amira%20Emad"> Amira Emad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mai%20Ramadan"> Mai Ramadan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection by integrating alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. It is believed that the computational pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. It is believed that the quality check step ensured the integrity of the sequencing data, while the trimming process is thought to have removed low-quality bases and adaptors. In the alignment step, it is believed that the reads were accurately mapped to the reference genome, and the subsequent variant calling step is believed to have identified potential genetic variants. The annotation step is believed to have provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. It is believed that the computational pipeline presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline are believed to enable customization and adaptation to various datasets and research settings. Further optimization and validation are believed to be necessary to enhance performance and applicability across diverse populations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lynch%20Syndrome" title="Lynch Syndrome">Lynch Syndrome</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20pipeline" title=" computational pipeline"> computational pipeline</a>, <a href="https://publications.waset.org/abstracts/search?q=alignment" title=" alignment"> alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=variant%20calling" title=" variant calling"> variant calling</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20markers" title=" genetic markers"> genetic markers</a> </p> <a href="https://publications.waset.org/abstracts/178986/computational-pipeline-for-lynch-syndrome-detection-integrating-alignment-variant-calling-and-annotations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/178986.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=annotations&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=annotations&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>