
Search results for: text similarity

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="text similarity"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1947</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: text similarity</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1947</span> Text Similarity in Vector Space Models: A Comparative Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omid%20Shahmirzadi">Omid Shahmirzadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Adam%20Lugowski"> Adam Lugowski</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenneth%20Younge"> Kenneth Younge</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models to perform this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. Otherwise, TFIDF performs surprisingly well in other cases: in particular for longer and more technical texts or for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20data" title="big data">big data</a>, <a href="https://publications.waset.org/abstracts/search?q=patent" title=" patent"> patent</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20embedding" title=" text embedding"> text embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20similarity" title=" text similarity"> text similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space%20model" title=" vector space model"> vector space model</a> </p> <a href="https://publications.waset.org/abstracts/102930/text-similarity-in-vector-space-models-a-comparative-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1946</span> A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20P.%20Sandesh">K. P. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20H.%20Suman"> M. H. Suman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text processing plays an important role in information retrieval, data-mining, and web search. Measuring the similarity between the documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature the proposed measure takes the following three cases into account: (1) The feature appears in both documents; (2) The feature appears in only one document and; (3) The feature appears in none of the documents. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=document%20classification" title="document classification">document classification</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20clustering" title=" document clustering"> document clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=classifiers" title=" classifiers"> classifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering%20algorithms" title=" clustering algorithms"> clustering algorithms</a> </p> <a href="https://publications.waset.org/abstracts/22708/a-similarity-measure-for-classification-and-clustering-in-image-based-medical-and-text-based-banking-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22708.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">518</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1945</span> Measuring Text-Based Semantics Relatedness Using WordNet</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Madiha%20Khan">Madiha Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sidrah%20Ramzan"> Sidrah Ramzan</a>, <a href="https://publications.waset.org/abstracts/search?q=Seemab%20Khan"> Seemab Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahzad%20Hassan"> Shahzad Hassan</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamran%20Saeed"> Kamran Saeed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Measuring semantic similarity between texts is calculating semantic relatedness between texts using various techniques. Our web application (Measuring Relatedness of Concepts-MRC) allows user to input two text corpuses and get semantic similarity percentage between both using WordNet. Our application goes through five stages for the computation of semantic relatedness. Those stages are: Preprocessing (extracts keywords from content), Feature Extraction (classification of words into Parts-of-Speech), Synonyms Extraction (retrieves synonyms against each keyword), Measuring Similarity (using keywords and synonyms, similarity is measured) and Visualization (graphical representation of similarity measure). Hence the user can measure similarity on basis of features as well. The end result is a percentage score and the word(s) which form the basis of similarity between both texts with use of different tools on same platform. In future work we look forward for a Web as a live corpus application that provides a simpler and user friendly tool to compare documents and extract useful information. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Graphviz%20representation" title="Graphviz representation">Graphviz representation</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20relatedness" title=" semantic relatedness"> semantic relatedness</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20measurement" title=" similarity measurement"> similarity measurement</a>, <a href="https://publications.waset.org/abstracts/search?q=WordNet%20similarity" title=" WordNet similarity"> WordNet similarity</a> </p> <a href="https://publications.waset.org/abstracts/95106/measuring-text-based-semantics-relatedness-using-wordnet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95106.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">237</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1944</span> A Word-to-Vector Formulation for Word Representation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Rizkallah">Sandra Rizkallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Amir%20F.%20Atiya"> Amir F. Atiya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a novel word to vector representation that is based on embedding the words into a sphere, whereby the dot product of the corresponding vectors represents the similarity between any two words. Embedding the vectors into a sphere enabled us to take into consideration the antonymity between words, not only the synonymity, because of the suitability to handle the polarity nature of words. For example, a word and its antonym can be represented as a vector and its negative. Moreover, we have managed to extract an adequate vocabulary. The obtained results show that the proposed approach can capture the essence of the language, and can be generalized to estimate a correct similarity of any new pair of words. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20to%20vector" title=" word to vector"> word to vector</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20similarity" title=" text similarity"> text similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a> </p> <a href="https://publications.waset.org/abstracts/81808/a-word-to-vector-formulation-for-word-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">275</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1943</span> Graph-Based Semantical Extractive Text Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mina%20Samizadeh">Mina Samizadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the past few decades, there has been an explosion in the amount of available data produced from various sources with different topics. The availability of this enormous data necessitates us to adopt effective computational tools to explore the data. This leads to an intense growing interest in the research community to develop computational methods focused on processing this text data. A line of study focused on condensing the text so that we are able to get a higher level of understanding in a shorter time. The two important tasks to do this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words from a text. This makes us familiar with the general topic of a text. In text summarization, we are interested in producing a short-length text which includes important information about the document. The TextRank algorithm, an unsupervised learning method that is an extension of the PageRank (algorithm which is the base algorithm of Google search engine for searching pages and ranking them), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and declare them as a result. However, this algorithm neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as a part of generating the summary to overcome coverage problems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyword%20extraction" title="keyword extraction">keyword extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=n-gram%20extraction" title=" n-gram extraction"> n-gram extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20summarization" title=" text summarization"> text summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20clustering" title=" topic clustering"> topic clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20analysis" title=" semantic analysis"> semantic analysis</a> </p> <a href="https://publications.waset.org/abstracts/160526/graph-based-semantical-extractive-text-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160526.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1942</span> Network Word Discovery Framework Based on Sentence Semantic Vector Similarity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ganfeng%20Yu">Ganfeng Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuefeng%20Ma"> Yuefeng Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Shanliang%20Yang"> Shanliang Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The word discovery is a key problem in text information retrieval technology. Methods in new word discovery tend to be closely related to words because they generally obtain new word results by analyzing words. With the popularity of social networks, individual netizens and online self-media have generated various network texts for the convenience of online life, including network words that are far from standard Chinese expression. How detect network words is one of the important goals in the field of text information retrieval today. In this paper, we integrate the word embedding model and clustering methods to propose a network word discovery framework based on sentence semantic similarity (S³-NWD) to detect network words effectively from the corpus. This framework constructs sentence semantic vectors through a distributed representation model, uses the similarity of sentence semantic vectors to determine the semantic relationship between sentences, and finally realizes network word discovery by the meaning of semantic replacement between sentences. The experiment verifies that the framework not only completes the rapid discovery of network words but also realizes the standard word meaning of the discovery of network words, which reflects the effectiveness of our work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20information%20retrieval" title="text information retrieval">text information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20word%20discovery" title=" new word discovery"> new word discovery</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20extraction" title=" information extraction"> information extraction</a> </p> <a href="https://publications.waset.org/abstracts/153917/network-word-discovery-framework-based-on-sentence-semantic-vector-similarity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153917.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1941</span> The Acquisition of Case in Biological Domain Based on Text Mining</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shen%20Jian">Shen Jian</a>, <a href="https://publications.waset.org/abstracts/search?q=Hu%20Jie"> Hu Jie</a>, <a href="https://publications.waset.org/abstracts/search?q=Qi%20Jin"> Qi Jin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Wei%20Jie"> Liu Wei Jie</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Ji%20Yi"> Chen Ji Yi</a>, <a href="https://publications.waset.org/abstracts/search?q=Peng%20Ying%20Hong"> Peng Ying Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to settle the problem of acquiring case in biological related to design problems, a biometrics instance acquisition method based on text mining is presented. Through the construction of corpus text vector space and knowledge mining, the feature selection, similarity measure and case retrieval method of text in the field of biology are studied. First, we establish a vector space model of the corpus in the biological field and complete the preprocessing steps. Then, the corpus is retrieved by using the vector space model combined with the functional keywords to obtain the biological domain examples related to the design problems. Finally, we verify the validity of this method by taking the example of text. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title="text mining">text mining</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space%20model" title=" vector space model"> vector space model</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=biologically%20inspired%20design" title=" biologically inspired design"> biologically inspired design</a> </p> <a href="https://publications.waset.org/abstracts/88075/the-acquisition-of-case-in-biological-domain-based-on-text-mining" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88075.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">261</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1940</span> Multi-Objective Optimal Threshold Selection for Similarity Functions in Siamese Networks for Semantic Textual Similarity Tasks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kriuk%20Boris">Kriuk Boris</a>, <a href="https://publications.waset.org/abstracts/search?q=Kriuk%20Fedor"> Kriuk Fedor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comparative study of fundamental similarity functions for Siamese networks in semantic textual similarity (STS) tasks. We evaluate various similarity functions using the STS Benchmark dataset, analyzing their performance and stability. Additionally, we introduce a multi-objective approach for optimal threshold selection. Our findings provide insights into the effectiveness of different similarity functions and offer a straightforward method for threshold selection optimization, contributing to the advancement of Siamese network architectures in STS applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=siamese%20networks" title="siamese networks">siamese networks</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20textual%20similarity" title=" semantic textual similarity"> semantic textual similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20functions" title=" similarity functions"> similarity functions</a>, <a href="https://publications.waset.org/abstracts/search?q=STS%20benchmark%20dataset" title=" STS benchmark dataset"> STS benchmark dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold%20selection" title=" threshold selection"> threshold selection</a> </p> <a href="https://publications.waset.org/abstracts/187407/multi-objective-optimal-threshold-selection-for-similarity-functions-in-siamese-networks-for-semantic-textual-similarity-tasks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">37</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1939</span> Literature Review on Text Comparison Techniques: Analysis of Text Extraction, Main Comparison and Visual Representation Tools</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andriana%20Mkrtchyan">Andriana Mkrtchyan</a>, <a href="https://publications.waset.org/abstracts/search?q=Vahe%20Khlghatyan"> Vahe Khlghatyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The choice of a profession is one of the most important decisions people make throughout their life. With the development of modern science, technologies, and all the spheres existing in the modern world, more and more professions are being arisen that complicate even more the process of choosing. Hence, there is a need for a guiding platform to help people to choose a profession and the right career path based on their interests, skills, and personality. This review aims at analyzing existing methods of comparing PDF format documents and suggests that a 3-stage approach is implemented for the comparison, that is – 1. text extraction from PDF format documents, 2. comparison of the extracted text via NLP algorithms, 3. comparison representation using special shape and color psychology methodology. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20psychology" title="color psychology">color psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20acquisition%2Fextraction" title=" data acquisition/extraction"> data acquisition/extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20augmentation" title=" data augmentation"> data augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=disambiguation" title=" disambiguation"> disambiguation</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=outlier%20detection" title=" outlier detection"> outlier detection</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20similarity" title=" semantic similarity"> semantic similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=text-mining" title=" text-mining"> text-mining</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20evaluation" title=" user evaluation"> user evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20search" title=" visual search"> visual search</a> </p> <a href="https://publications.waset.org/abstracts/161588/literature-review-on-text-comparison-techniques-analysis-of-text-extraction-main-comparison-and-visual-representation-tools" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161588.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1938</span> Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subodh%20Chandra%20Shakya">Subodh Chandra Shakya</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajendra%20Sapkota"> Rajendra Sapkota</a>, <a href="https://publications.waset.org/abstracts/search?q=Aakash%20Tamang"> Aakash Tamang</a>, <a href="https://publications.waset.org/abstracts/search?q=Shushant%20Pudasaini"> Shushant Pudasaini</a>, <a href="https://publications.waset.org/abstracts/search?q=Sujan%20Adhikari"> Sujan Adhikari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajjan%20Adhikari"> Sajjan Adhikari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lots of efforts have been made in order to measure the semantic similarity between the text corpora in the documents. Techniques have been evolved to measure the similarity of two documents. One such state-of-art technique in the field of Natural Language Processing (NLP) is word to vector models, which converts the words into their word-embedding and measures the similarity between the vectors. We found this to be quite useful for the task of resume ranking. So, this research paper is the implementation of the word2vec model along with other Natural Language Processing techniques in order to rank the resumes for the particular job description so as to automate the process of hiring. 

1937. Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models
Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev
Abstract: Nowadays, natural language processing tools, including various text processing techniques, play a crucial role in our daily lives. Very advanced models exist for major languages such as English and Russian, but for some languages, such as Uzbek, NLP models have been developed only recently, so only a few are available. Moreover, no existing work shows how the Uzbek NLP models behave in different situations and when to use them. This work tries to close this gap by comparing the Uzbek NLP models existing at the time this article was written. The authors compare the models in two scenarios, sentiment analysis and sentence similarity, which correspond to the two most common problems in industry: classification and similarity. Another outcome of this work is two datasets for classification and sentence similarity in Uzbek, which we generated ourselves and which can be useful in both industry and academia.
Keywords: NLP, benchmark, bert, vectorization
Procedia: https://publications.waset.org/abstracts/182098/benchmarking-bert-based-low-resource-language-case-uzbek-nlp-models | PDF: https://publications.waset.org/abstracts/182098.pdf | Downloads: 54
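
The sentence-similarity side of such a benchmark can be sketched as below; multilingual BERT is used here only as a stand-in (the paper benchmarks Uzbek-specific models), mean pooling is one common choice, and the example sentences are placeholders.

# Sentence similarity from a BERT encoder: mean-pooled embeddings + cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"  # stand-in checkpoint; swap in an Uzbek BERT model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, tokens, dim)
    return hidden.mean(dim=1).squeeze(0)            # mean pooling over tokens

a = embed("Bugun havo juda issiq.")
b = embed("Bugun kun juda issiq.")
print(torch.cosine_similarity(a, b, dim=0).item())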
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=NLP" title="NLP">NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmak" title=" benchmak"> benchmak</a>, <a href="https://publications.waset.org/abstracts/search?q=bert" title=" bert"> bert</a>, <a href="https://publications.waset.org/abstracts/search?q=vectorization" title=" vectorization"> vectorization</a> </p> <a href="https://publications.waset.org/abstracts/182098/benchmarking-bert-based-low-resource-language-case-uzbek-nlp-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182098.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1936</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for extraction of text subtitles in large video is proposed. The video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the motive of providing useful information about that video. So need arises to detect text present in video to understanding and video indexing. This is achieved in two steps. First step is text localization and the second step is text verification. The method of text detection can be extended to text recognition which finds applications in automatic video indexing; video annotation and content based video retrieval. The method has been tested on various types of videos. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1935</span> Approximately Similarity Measurement of Web Sites Using Genetic Algorithms and Binary Trees</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Doru%20Anastasiu%20Popescu">Doru Anastasiu Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20R%C4%83dulescu"> Dan Rădulescu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we determine the similarity of two HTML web applications. We are going to use a genetic algorithm in order to determine the most significant web pages of each application (we are not going to use every web page of a site). Using these significant web pages, we will find the similarity value between the two applications. The algorithm is going to be efficient because we are going to use a reduced number of web pages for comparisons but it will return an approximate value of the similarity. The binary trees are used to keep the tags from the significant pages. The algorithm was implemented in Java language. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tag" title="Tag">Tag</a>, <a href="https://publications.waset.org/abstracts/search?q=HTML" title=" HTML"> HTML</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20page" title=" web page"> web page</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20value" title=" similarity value"> similarity value</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20tree" title=" binary tree"> binary tree</a> </p> <a href="https://publications.waset.org/abstracts/50460/approximately-similarity-measurement-of-web-sites-using-genetic-algorithms-and-binary-trees" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50460.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">355</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1934</span> A Summary-Based Text Classification Model for Graph Attention Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuo%20Liu">Shuo Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Chinese text classification tasks, redundant words and phrases can interfere with the formation of extracted and analyzed text information, leading to a decrease in the accuracy of the classification model. To reduce irrelevant elements, extract and utilize text content information more efficiently and improve the accuracy of text classification models. In this paper, the text in the corpus is first extracted using the TextRank algorithm for abstraction, the words in the abstract are used as nodes to construct a text graph, and then the graph attention network (GAT) is used to complete the task of classifying the text. Testing on a Chinese dataset from the network, the classification accuracy was improved over the direct method of generating graph structures using text. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chinese%20natural%20language%20processing" title="Chinese natural language processing">Chinese natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification" title=" text classification"> text classification</a>, <a href="https://publications.waset.org/abstracts/search?q=abstract%20extraction" title=" abstract extraction"> abstract extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20attention%20network" title=" graph attention network"> graph attention network</a> </p> <a href="https://publications.waset.org/abstracts/158060/a-summary-based-text-classification-model-for-graph-attention-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158060.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1933</span> Urdu Text Extraction Method from Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samabia%20Tehsin">Samabia Tehsin</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Kausar"> Sumaira Kausar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the vast increase in the multimedia data in recent years, efficient and robust retrieval techniques are needed to retrieve and index images/ videos. Text embedded in the images can serve as the strong retrieval tool for images. This is the reason that text extraction is an area of research with increasing attention. English text extraction is the focus of many researchers but very less work has been done on other languages like Urdu. This paper is focusing on Urdu text extraction from video frames. This paper presents a text detection feature set, which has the ability to deal up with most of the problems connected with the text extraction process. To test the validity of the method, it is tested on Urdu news dataset, which gives promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=caption%20text" title="caption text">caption text</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title=" content-based image retrieval"> content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=document%20analysis" title=" document analysis"> document analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20extraction" title=" text extraction"> text extraction</a> </p> <a href="https://publications.waset.org/abstracts/9566/urdu-text-extraction-method-from-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9566.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1932</span> Information Disclosure And Financial Sentiment Index Using a Machine Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alev%20Atak">Alev Atak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we aim to create a financial sentiment index by investigating the company’s voluntary information disclosures. We retrieve structured content from BIST 100 companies’ financial reports for the period 1998-2018 and extract relevant financial information for sentiment analysis through Natural Language Processing. We measure strategy-related disclosures and their cross-sectional variation and classify report content into generic sections using synonym lists divided into four main categories according to their liquidity risk profile, risk positions, intra-annual information, and exposure to risk. We use Word Error Rate and Cosin Similarity for comparing and measuring text similarity and derivation in sets of texts. In addition to performing text extraction, we will provide a range of text analysis options, such as the readability metrics, word counts using pre-determined lists (e.g., forward-looking, uncertainty, tone, etc.), and comparison with reference corpus (word, parts of speech and semantic level). Therefore, we create an adequate analytical tool and a financial dictionary to depict the importance of granular financial disclosure for investors to identify correctly the risk-taking behavior and hence make the aggregated effects traceable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=financial%20sentiment" title="financial sentiment">financial sentiment</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20disclosure" title=" information disclosure"> information disclosure</a>, <a href="https://publications.waset.org/abstracts/search?q=risk" title=" risk"> risk</a> </p> <a href="https://publications.waset.org/abstracts/158769/information-disclosure-and-financial-sentiment-index-using-a-machine-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158769.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1931</span> Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adnan%20A.%20Y.%20Mustafa">Adnan A. Y. Mustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20images" title="big images">big images</a>, <a href="https://publications.waset.org/abstracts/search?q=binary%20images" title=" binary images"> binary images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20matching" title=" image matching"> image matching</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20similarity" title=" image similarity"> image similarity</a> </p> <a href="https://publications.waset.org/abstracts/89963/quick-similarity-measurement-of-binary-images-via-probabilistic-pixel-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1930</span> A Context-Sensitive Algorithm for Media Similarity Search </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guang-Ho%20Cha">Guang-Ho Cha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a context-sensitive media similarity search algorithm. One of the central problems regarding media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This is because the notion of similarity is usually based on high-level abstraction but the low-level features do not sometimes reflect the human perception. Many media search algorithms have used the Minkowski metric to measure similarity between image pairs. However those functions cannot adequately capture the aspects of the characteristics of the human visual system as well as the nonlinear relationships in contextual information given by images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=context-sensitive%20search" title="context-sensitive search">context-sensitive search</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20search" title=" image search"> image search</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20ranking" title=" similarity ranking"> similarity ranking</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20search" title=" similarity search"> similarity search</a> </p> <a href="https://publications.waset.org/abstracts/65150/a-context-sensitive-algorithm-for-media-similarity-search" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65150.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1929</span> Semantic Textual Similarity on Contracts: Exploring Multiple Negative Ranking Losses for Sentence Transformers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yogendra%20Sisodia">Yogendra Sisodia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Researchers are becoming more interested in extracting useful information from legal documents thanks to the development of large-scale language models in natural language processing (NLP), and deep learning has accelerated the creation of powerful text mining models. Legal fields like contracts benefit greatly from semantic text search since it makes it quick and easy to find related clauses. After collecting sentence embeddings, it is relatively simple to locate sentences with a comparable meaning throughout the entire legal corpus. The author of this research investigated two pre-trained language models for this task: MiniLM and Roberta, and further fine-tuned them on Legal Contracts. The author used Multiple Negative Ranking Loss for the creation of sentence transformers. The fine-tuned language models and sentence transformers showed promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=legal%20contracts" title="legal contracts">legal contracts</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20negative%20ranking%20loss" title=" multiple negative ranking loss"> multiple negative ranking loss</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20inference" title=" natural language inference"> natural language inference</a>, <a href="https://publications.waset.org/abstracts/search?q=sentence%20transformers" title=" sentence transformers"> sentence transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20textual%20similarity" title=" semantic textual similarity"> semantic textual similarity</a> </p> <a href="https://publications.waset.org/abstracts/156624/semantic-textual-similarity-on-contracts-exploring-multiple-negative-ranking-losses-for-sentence-transformers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156624.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1928</span> Small Text Extraction from Documents and Chart Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rominkumar%20Busa">Rominkumar Busa</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahira%20K.%20C."> Shahira K. C.</a>, <a href="https://publications.waset.org/abstracts/search?q=Lijiya%20A."> Lijiya A.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text recognition is an important area in computer vision which deals with detecting and recognising text from an image. The Optical Character Recognition (OCR) is a saturated area these days and with very good text recognition accuracy. However the same OCR methods when applied on text with small font sizes like the text data of chart images, the recognition rate is less than 30%. In this work, aims to extract small text in images using the deep learning model, CRNN with CTC loss. The text recognition accuracy is found to improve by applying image enhancement by super resolution prior to CRNN model. We also observe the text recognition rate further increases by 18% by applying the proposed method, which involves super resolution and character segmentation followed by CRNN with CTC loss. The efficiency of the proposed method shows that further pre-processing on chart image text and other small text images will improve the accuracy further, thereby helping text extraction from chart images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=small%20text%20extraction" title="small text extraction">small text extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR" title=" OCR"> OCR</a>, <a href="https://publications.waset.org/abstracts/search?q=scene%20text%20recognition" title=" scene text recognition"> scene text recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=CRNN" title=" CRNN"> CRNN</a> </p> <a href="https://publications.waset.org/abstracts/150310/small-text-extraction-from-documents-and-chart-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1927</span> Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yunseok%20Noh">Yunseok Noh</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Uk%20Kwak"> Chang-Uk Kwak</a>, <a href="https://publications.waset.org/abstracts/search?q=Sun-Joong%20Kim"> Sun-Joong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong-Bae%20Park"> Seong-Bae Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Scripts are one of the basic text resources to understand broadcasting contents. Since broadcast media wields lots of influence over the public, tools for understanding broadcasting contents are more required. Topic modeling is the method to get the summary of the broadcasting contents from its scripts. Generally, scripts represent contents descriptively with directions and speeches. Scripts also provide scene segments that can be seen as semantic units. Therefore, a script can be topic modeled by treating a scene segment as a document. Because scripts consist of speeches mainly, however, relatively small co-occurrences among words in the scene segments are observed. This causes inevitably the bad quality of topics based on statistical learning method. To tackle this problem, we propose a method of learning with additional word co-occurrence information obtained using scene similarities. The main idea of improving topic quality is that the information that two or more texts are topically related can be useful to learn high quality of topics. In addition, by using high quality of topics, we can get information more accurate whether two texts are related or not. In this paper, we regard two scene segments are related if their topical similarity is high enough. We also consider that words are co-occurred if they are in topically related scene segments together. In the experiments, we showed the proposed method generates a higher quality of topics from Korean drama scripts than the baselines. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broadcasting%20contents" title="broadcasting contents">broadcasting contents</a>, <a href="https://publications.waset.org/abstracts/search?q=scripts" title=" scripts"> scripts</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20similarity" title=" text similarity"> text similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20model" title=" topic model"> topic model</a> </p> <a href="https://publications.waset.org/abstracts/43196/improving-topic-quality-of-scripts-by-using-scene-similarity-based-word-co-occurrence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43196.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1926</span> Text Data Preprocessing Library: Bilingual Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kabil%20Boukhari">Kabil Boukhari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the context of information retrieval, the selection of the most relevant words is a very important step. In fact, the text cleaning allows keeping only the most representative words for a better use. In this paper, we propose a library for the purpose text preprocessing within an implemented application to facilitate this task. This study has two purposes. The first, is to present the related work of the various steps involved in text preprocessing, presenting the segmentation, stemming and lemmatization algorithms that could be efficient in the rest of study. The second, is to implement a developed tool for text preprocessing in French and English. This library accepts unstructured text as input and provides the preprocessed text as output, based on a set of rules and on a base of stop words for both languages. The proposed library has been made on different corpora and gave an interesting result. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20preprocessing" title="text preprocessing">text preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20extraction" title=" knowledge extraction"> knowledge extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=normalization" title=" normalization"> normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20generation" title=" text generation"> text generation</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20retrieval" title=" information retrieval"> information retrieval</a> </p> <a href="https://publications.waset.org/abstracts/150846/text-data-preprocessing-library-bilingual-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150846.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1925</span> A Clustering Algorithm for Massive Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ming%20Liu">Ming Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chong%20Wu"> Chong Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Liu"> Bingquan Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Chen"> Lei Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Internet users have to face the massive amount of textual data every day. Organizing texts into categories can help users dig the useful information from large-scale text collection. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised characteristic. Unfortunately, most of traditional clustering algorithms lose their high qualities on large-scale text collection. This situation mainly attributes to the high- dimensional vectors generated from texts. To effectively and efficiently cluster large-scale text collection, this paper proposes a vector reconstruction based clustering algorithm. Only the features that can represent the cluster are preserved in cluster’s representative vector. This algorithm alternately repeats two sub-processes until it converges. One process is partial tuning sub-process, where feature’s weight is fine-tuned by iterative process. To accelerate clustering velocity, an intersection based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other process is overall tuning sub-process, where the features are reallocated among different clusters. In this sub-process, the features useless to represent the cluster are removed from cluster’s representative vector. Experimental results on the three text collections (including two small-scale and one large-scale text collections) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vector%20reconstruction" title="vector reconstruction">vector reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=large-scale%20text%20clustering" title=" large-scale text clustering"> large-scale text clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=partial%20tuning%20sub-process" title=" partial tuning sub-process"> partial tuning sub-process</a>, <a href="https://publications.waset.org/abstracts/search?q=overall%20tuning%20sub-process" title=" overall tuning sub-process"> overall tuning sub-process</a> </p> <a href="https://publications.waset.org/abstracts/22681/a-clustering-algorithm-for-massive-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22681.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1924</span> Detecting Paraphrases in Arabic Text</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amal%20Alshahrani">Amal Alshahrani</a>, <a href="https://publications.waset.org/abstracts/search?q=Allan%20Ramsay"> Allan Ramsay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Paraphrasing is one of the important tasks in natural language processing; i.e. alternative ways to express the same concept by using different words or phrases. Paraphrases can be used in many natural language applications, such as Information Retrieval, Machine Translation, Question Answering, Text Summarization, or Information Extraction. To obtain pairs of sentences that are paraphrases we create a system that automatically extracts paraphrases from a corpus, which is built from different sources of news article since these are likely to contain paraphrases when they report the same event on the same day. There are existing simple standard approaches (e.g. TF-IDF vector space, cosine similarity) and alignment technique (e.g. Dynamic Time Warping (DTW)) for extracting paraphrase which have been applied to the English. However, the performance of these approaches could be affected when they are applied to another language, for instance Arabic language, due to the presence of phenomena which are not present in English, such as Free Word Order, Zero copula, and Pro-dropping. These phenomena will affect the performance of these algorithms. Thus, if we can analysis how the existing algorithms for English fail for Arabic then we can find a solution for Arabic. The results are promising. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=TF-IDF" title=" TF-IDF"> TF-IDF</a>, <a href="https://publications.waset.org/abstracts/search?q=cosine%20similarity" title=" cosine similarity"> cosine similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping%20%28DTW%29" title=" dynamic time warping (DTW)"> dynamic time warping (DTW)</a> </p> <a href="https://publications.waset.org/abstracts/35776/detecting-paraphrases-in-arabic-text" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35776.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1923</span> Review and Suggestions of the Similarity between Employee and Its Workplace</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gi%20Ryung%20Song">Gi Ryung Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyoung%20Seok%20Kim"> Kyoung Seok Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study reviewed the literature that focused on similarity of various characteristics such as values, personality, or demographics between employee and other elements in its organization for example employee with leader, job, and organization. We divided a body of this study into two parts and organized and demonstrated recent studies in first part. Three issues appeared in this part, which are statistical ways of measuring similarity, supervisor-subordinate similarity, and person-organization fit with person-job fit. In the latter part, based on the three issues of recent studies, we suggested three propositions about points that the recent studies missed or the studies did not orient. First proposition argued about the direction of similarity, which could also be interpreted as there is causal relation between employee and its workplace environments. Second, we suggested a consideration of eliminating common variance buried in one’s characteristics or its profiles. Third proposition was about the similarity of extra role behavior between individual and organization, and we treated this organization’s level of extra role behavior as a kind of its culture. In doing so, similarity of individual’s extra role behavior and organization’s has the meaning that individual’s congruence against their organization culture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=similarity" title="similarity">similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=person-organization%20fit" title=" person-organization fit"> person-organization fit</a>, <a href="https://publications.waset.org/abstracts/search?q=supervisor-subordinate%20similarity" title=" supervisor-subordinate similarity"> supervisor-subordinate similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=literature%20review" title=" literature review"> literature review</a> </p> <a href="https://publications.waset.org/abstracts/54492/review-and-suggestions-of-the-similarity-between-employee-and-its-workplace" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54492.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1922</span> OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Bagirzade">A. R. Bagirzade</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sh.%20Najafova"> A. Sh. Najafova</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Yessirkepova"> S. M. Yessirkepova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20S.%20Albert"> E. S. Albert</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program. It describes automatic text recognition in images. OCR is necessary because optical input devices can only transmit raster graphics as a result. Text recognition describes the task of recognizing letters shown as such, to identify and assign them an assigned numerical value in accordance with the usual text encoding (ASCII, Unicode). The peculiarity of this study conducted by the authors using the example of the ABBYY FineReader, was confirmed and shown in practice, the improvement of digital text recognition platforms developed by Electronic Publication. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ABBYY%20FineReader%20system" title="ABBYY FineReader system">ABBYY FineReader system</a>, <a href="https://publications.waset.org/abstracts/search?q=algorithm%20symbol%20recognition" title=" algorithm symbol recognition"> algorithm symbol recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR%2FICR%20techniques" title=" OCR/ICR techniques"> OCR/ICR techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20technologies" title=" recognition technologies"> recognition technologies</a> </p> <a href="https://publications.waset.org/abstracts/130255/ocricr-text-recognition-using-abbyy-finereader-as-an-example-text" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130255.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1921</span> 2D Fingerprint Performance for PubChem Chemical Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatimah%20Zawani%20Abdullah">Fatimah Zawani Abdullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Shereena%20Mohd%20Arif"> Shereena Mohd Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Malim"> Nurul Malim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study of molecular similarity search in chemical database is increasingly widespread, especially in the area of drug discovery. Similarity search is an application in the field of Chemoinformatics to measure the similarity between the molecular structure which is known as the query and the structure of chemical compounds in the database. Similarity search is also one of the approaches in virtual screening which involves computational techniques and scoring the probabilities of activity. The main objective of this work is to determine the best fingerprint when compared to the other five fingerprints selected in this study using PubChem chemical dataset. This paper will discuss the similarity searching process conducted using 6 types of descriptors, which are ECFP4, ECFC4, FCFP4, FCFC4, SRECFC4 and SRFCFC4 on 15 activity classes of PubChem dataset using Tanimoto coefficient to calculate the similarity between the query structures and each of the database structure. The results suggest that ECFP4 performs the best to be used with Tanimoto coefficient in the PubChem dataset. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=2D%20fingerprints" title="2D fingerprints">2D fingerprints</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanimoto" title=" Tanimoto"> Tanimoto</a>, <a href="https://publications.waset.org/abstracts/search?q=PubChem" title=" PubChem"> PubChem</a>, <a href="https://publications.waset.org/abstracts/search?q=similarity%20searching" title=" similarity searching"> similarity searching</a>, <a href="https://publications.waset.org/abstracts/search?q=chemoinformatics" title=" chemoinformatics"> chemoinformatics</a> </p> <a href="https://publications.waset.org/abstracts/15097/2d-fingerprint-performance-for-pubchem-chemical-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">293</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1920</span> Programmed Speech to Text Summarization Using Graph-Based Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamsini%20Pulugurtha">Hamsini Pulugurtha</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20V.%20S.%20L.%20Jagadamba"> P. V. S. L. Jagadamba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Programmed Speech to Text and Text Summarization Using Graph-based Algorithms can be utilized in gatherings to get the short depiction of the gathering for future reference. This gives signature check utilizing Siamese neural organization to confirm the personality of the client and convert the client gave sound record which is in English into English text utilizing the discourse acknowledgment bundle given in python. At times just the outline of the gathering is required, the answer for this text rundown. 
The transcript is therefore summarized using natural language processing approaches, for example unsupervised extractive text summarization algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siamese%20neural%20network" title="Siamese neural network">Siamese neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20speech" title=" English speech"> English speech</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20text" title=" English text"> English text</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=unsupervised%20extractive%20text%20summarization" title=" unsupervised extractive text summarization"> unsupervised extractive text summarization</a> </p> <a href="https://publications.waset.org/abstracts/143079/programmed-speech-to-text-summarization-using-graph-based-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143079.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1919</span> On-Road Text Detection Platform for Driver Assistance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guezouli%20Larbi">Guezouli Larbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Belkacem%20Soundes"> Belkacem Soundes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automating text detection can assist the driver by providing more information about the environment, making it easier to read road signs such as directional signs, event notices, store names, etc. In this paper, a two-stage system is proposed. In the first stage, pseudo-Zernike moments are used to pinpoint areas of the image that may contain text; this part proceeds in three main steps: region-of-interest (ROI) detection, text localization, and non-text region filtering. In the second stage, a convolutional neural network architecture (On-Road Text Detection Network, ORTDN) performs the classification phase. The results show that the proposed framework achieves ≈ 35 fps and an mAP of ≈ 90%, i.e., low computational time with competitive accuracy.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title="text detection">text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=PZM" title=" PZM"> PZM</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/161507/on-road-text-detection-platform-for-driver-assistance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161507.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">83</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1918</span> Reducing Accidents Using Text Stops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Benish%20Chaudhry">Benish Chaudhry</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most of the accidents these days are occurring because of the ‘text-and-drive’ concept. If we look at the structure of cities in UAE, there are great distances, because of which it is impossible to drive without using or merely checking the cellphone. Moreover, if we look at the road structure, it is almost impossible to stop at a point and text. With the introduction of TEXT STOPs, drivers will be able to stop different stops for a maximum of 1 and a half-minute in order to reply or write a message. They can be introduced at a distance of 10 minutes of driving on the average speed of the road, so the drivers can look forward to a stop and can reply to a text when needed. A user survey indicates that drivers are willing to NOT text-and-drive if they have such a facility available. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=transport" title="transport">transport</a>, <a href="https://publications.waset.org/abstracts/search?q=accidents" title=" accidents"> accidents</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20planning" title=" urban planning"> urban planning</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20planning" title=" road planning"> road planning</a> </p> <a href="https://publications.waset.org/abstracts/44563/reducing-accidents-using-text-stops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/44563.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=64">64</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=65">65</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=text%20similarity&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
