
Search results for: language models

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="language models"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 10186</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: language models</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10186</span> Models and Metamodels for Computer-Assisted Natural Language Grammar Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evgeny%20Pyshkin">Evgeny Pyshkin</a>, <a href="https://publications.waset.org/abstracts/search?q=Maxim%20Mozgovoy"> Maxim Mozgovoy</a>, <a href="https://publications.waset.org/abstracts/search?q=Vladislav%20Volkov"> Vladislav Volkov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper follows a discourse on computer-assisted language learning. We examine problems of foreign language teaching and learning and introduce a metamodel that can be used to define learning models of language grammar structures in order to support teacher/student interaction. Special attention is paid to the concept of a virtual language lab. Our approach to language education assumes to encourage learners to experiment with a language and to learn by discovering patterns of grammatically correct structures created and managed by a language expert. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-assisted%20instruction" title="computer-assisted instruction">computer-assisted instruction</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20learning" title=" language learning"> language learning</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20grammar%20models" title=" natural language grammar models"> natural language grammar models</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/15680/models-and-metamodels-for-computer-assisted-natural-language-grammar-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15680.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">519</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10185</span> The Content-Based Classroom: Perspectives on Integrating Language and Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mourad%20Ben%20Bennani">Mourad Ben Bennani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Views of language and language learning have undergone a tremendous change over the last decades. Language is no longer seen as a set of structured rules. It is rather viewed as a tool of interaction and communication. This shift in views has resulted in change in viewing language learning, which gave birth to various approaches and methodologies of language teaching. Two of these approaches are content-based instruction and content and language integrated learning (CLIL). These are similar approaches which integrate content and foreign/second language learning through various methodologies and models as a result of different implementations around the world. This presentation deals with sociocultural view of CBI and CLIL. It also defines language and content as vital components of CBI and CLIL. Next it reviews the origins of CBI and the continuum perspectives and CLIL definitions and models featured in the literature. Finally it summarizes current aspects around research in program evaluation with a focus on the benefits and challenges of these innovative approaches for second language teaching. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CBI" title="CBI">CBI</a>, <a href="https://publications.waset.org/abstracts/search?q=CLIL" title=" CLIL"> CLIL</a>, <a href="https://publications.waset.org/abstracts/search?q=CBI%20continuum" title=" CBI continuum"> CBI continuum</a>, <a href="https://publications.waset.org/abstracts/search?q=CLIL%20models" title=" CLIL models"> CLIL models</a> </p> <a href="https://publications.waset.org/abstracts/40898/the-content-based-classroom-perspectives-on-integrating-language-and-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40898.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10184</span> Dual Language Immersion Models in Theory and Practice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Gordon">S. Gordon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dual language immersion is growing fast in language teaching today. This study provides an overview and evaluation of the different models of Dual language immersion programs in US K-12 schools. First, the paper provides a brief current literature review on the theory of Dual Language Immersion (DLI) in Second Language Acquisition (SLA) studies. Second, examples of several types of DLI language teaching models in US K-12 public schools are presented (including 50/50 models, 90/10 models, etc.). Third, we focus on the unique example of DLI education in the state of Utah, a successful, growing program in K-12 schools that includes: French, Chinese, Spanish, and Portuguese. The project investigates the theory and practice particularly of the case of public elementary and secondary school children that study half their school day in the L1 and the other half in the chosen L2, from kindergarten (age 5-6) through high school (age 17-18). Finally, the project takes the observations of Utah French DLI elementary through secondary programs as a case study. To conclude, we look at the principal challenges, pedagogical objectives and outcomes, and important implications for other US states and other countries (such as France currently) that are in the process of developing similar language learning programs. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dual%20language%20immersion" title="dual language immersion">dual language immersion</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20teaching" title=" language teaching"> language teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=pedagogy" title=" pedagogy"> pedagogy</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=French" title=" French"> French</a> </p> <a href="https://publications.waset.org/abstracts/103249/dual-language-immersion-models-in-theory-and-practice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/103249.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10183</span> Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamshid%20Qodirov">Jamshid Qodirov</a>, <a href="https://publications.waset.org/abstracts/search?q=Sirojiddin%20Komolov"> Sirojiddin Komolov</a>, <a href="https://publications.waset.org/abstracts/search?q=Ravilov%20Mirahmad"> Ravilov Mirahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Olimjon%20Mirzayev"> Olimjon Mirzayev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, natural language processing tools play a crucial role in our daily lives, including various techniques with text processing. There are very advanced models in modern languages, such as English, Russian etc. But, in some languages, such as Uzbek, the NLP models have been developed recently. Thus, there are only a few NLP models in Uzbek language. Moreover, there is no such work that could show which Uzbek NLP model behaves in different situations and when to use them. This work tries to close this gap and compares the Uzbek NLP models existing as of the time this article was written. The authors try to compare the NLP models in two different scenarios: sentiment analysis and sentence similarity, which are the implementations of the two most common problems in the industry: classification and similarity. Another outcome from this work is two datasets for classification and sentence similarity in Uzbek language that we generated ourselves and can be useful in both industry and academia as well. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=NLP" title="NLP">NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmak" title=" benchmak"> benchmak</a>, <a href="https://publications.waset.org/abstracts/search?q=bert" title=" bert"> bert</a>, <a href="https://publications.waset.org/abstracts/search?q=vectorization" title=" vectorization"> vectorization</a> </p> <a href="https://publications.waset.org/abstracts/182098/benchmarking-bert-based-low-resource-language-case-uzbek-nlp-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182098.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10182</span> Probing Language Models for Multiple Linguistic Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bowen%20Ding">Bowen Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Yihao%20Kuang"> Yihao Kuang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded presentations of natural language that in text form. However, it is unknown how much linguistic information is encoded and how. In this paper, we construct several corresponding probing tasks for multiple linguistic information to clarify the encoding capabilities of different language models and performed a visual display. We firstly obtain word presentations in vector form from different language models, including BERT, ELMo, RoBERTa and GPT. Classifiers with a small scale of parameters and unsupervised tasks are then applied on these word vectors to discriminate their capability to encode corresponding linguistic information. The constructed probe tasks contain both semantic and syntactic aspects. The semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, and the grammatical aspect includes the ability of the language model to understand grammatical structures such as dependency relationships and reference relationships. We also compare encoding capabilities of different layers in the same language model to infer how linguistic information is encoded in the model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language%20models" title="language models">language models</a>, <a href="https://publications.waset.org/abstracts/search?q=probing%20task" title=" probing task"> probing task</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20presentation" title=" text presentation"> text presentation</a>, <a href="https://publications.waset.org/abstracts/search?q=linguistic%20information" title=" linguistic information"> linguistic information</a> </p> <a href="https://publications.waset.org/abstracts/168840/probing-language-models-for-multiple-linguistic-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10181</span> Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ma.%20Gracia%20Corazon%20Cayanan">Ma. Gracia Corazon Cayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Yuen%20Cheong"> Kai Yuen Cheong</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Sha"> Li Sha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Training a language model for a minority language has been a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in the area of Natural Language Processing (NLP). Moreover, the need for high computational resources and bulk data limit the attainment of this task. In this paper, we presented the following contributions: (1) we introduce and used a translation pair set of Tagalog and English (TL-EN) in pre-training a language model to a minority language resource; (2) we fine-tuned and evaluated top-ranking and pre-trained semantic textual similarity binary task (STSB) models, to both TL-EN and STS dataset pairs. (3) then, we reduced the size of the model to offset the need for high computational resources. Based on our results, the models that were pre-trained to translation pairs and STS pairs can perform well for STSB task. Also, having it reduced to a smaller dimension has no negative effect on the performance but rather has a notable increase on the similarity scores. Moreover, models that were pre-trained to a similar dataset have a tremendous effect on the model’s performance scores. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semantic%20matching" title="semantic matching">semantic matching</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20textual%20similarity%20binary%20task" title=" semantic textual similarity binary task"> semantic textual similarity binary task</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20resource%20minority%20language" title=" low resource minority language"> low resource minority language</a>, <a href="https://publications.waset.org/abstracts/search?q=fine-tuning" title="fine-tuning">fine-tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=dimension%20reduction" title=" dimension reduction"> dimension reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer%20models" title=" transformer models"> transformer models</a> </p> <a href="https://publications.waset.org/abstracts/145745/evaluation-and-compression-of-different-language-transformer-models-for-semantic-textual-similarity-binary-task-using-minority-language-resources" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10180</span> Prompt Design for Code Generation in Data Analysis Using Large Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lu%20Song%20Ma%20Li%20Zhi">Lu Song Ma Li Zhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. 
Keywords: large language models, prompt design, data analysis, code generation
PDF: https://publications.waset.org/abstracts/188761.pdf (40 downloads)

10179. JaCoText: A Pretrained Model for Java Code-Text Generation
Authors: Jessica Lopez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri
Abstract: Pretrained transformer-based models have shown high performance in natural language generation tasks, and a new wave of interest has surged in automatic programming-language code generation: translating natural language instructions into source code. Although well-known pre-trained language generation models perform well at learning programming languages, automatic code generation still needs work. In this paper, we introduce JaCoText, a Transformer-based model that generates Java source code from natural language text, leveraging the advantages of both natural language and code generation models. Building on findings from the state of the art, we (1) initialize our model from powerful pre-trained models, (2) explore additional pre-training on our Java dataset, (3) run experiments combining unimodal and bimodal data in training, and (4) scale the input and output lengths during fine-tuning. Experiments on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
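JaCoText itself does not appear to be publicly released, but the text-to-Java task it addresses can be sketched with an open seq2seq code checkpoint (CodeT5 here, purely as a stand-in):

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    name = "Salesforce/codet5-base"  # stand-in checkpoint, not JaCoText
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSeq2SeqLM.from_pretrained(name)

    nl = "return the maximum of two integer arguments a and b"
    ids = tok(nl, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=64)
    print(tok.decode(out[0], skip_special_tokens=True))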
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=java%20code%20generation" title="java code generation">java code generation</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=sequence-to-sequence%20models" title=" sequence-to-sequence models"> sequence-to-sequence models</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer%20neural%20networks" title=" transformer neural networks"> transformer neural networks</a> </p> <a href="https://publications.waset.org/abstracts/156766/jacotext-a-pretrained-model-for-java-code-text-generation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156766.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">284</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10178</span> A Large Language Model-Driven Method for Automated Building Energy Model Generation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yake%20Zhang">Yake Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Peng%20Xu"> Peng Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and the BEM library aimed at improving the efficiency of model generation. This method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is utilized to retrieve energy models that match the target building’s characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model, allowing for the creation of a building energy model that adapts to the user’s modeling requirements. This study enables the automatic creation of building energy models based on natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of a large language model in the field of building simulation and performance modeling. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=building%20energy%20modelling" title=" building energy modelling"> building energy modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=building%20simulation" title=" building simulation"> building simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20language%20model" title=" large language model"> large language model</a> </p> <a href="https://publications.waset.org/abstracts/190794/a-large-language-model-driven-method-for-automated-building-energy-model-generation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190794.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10177</span> Semantic Textual Similarity on Contracts: Exploring Multiple Negative Ranking Losses for Sentence Transformers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yogendra%20Sisodia">Yogendra Sisodia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Researchers are becoming more interested in extracting useful information from legal documents thanks to the development of large-scale language models in natural language processing (NLP), and deep learning has accelerated the creation of powerful text mining models. Legal fields like contracts benefit greatly from semantic text search since it makes it quick and easy to find related clauses. After collecting sentence embeddings, it is relatively simple to locate sentences with a comparable meaning throughout the entire legal corpus. The author of this research investigated two pre-trained language models for this task: MiniLM and Roberta, and further fine-tuned them on Legal Contracts. The author used Multiple Negative Ranking Loss for the creation of sentence transformers. The fine-tuned language models and sentence transformers showed promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=legal%20contracts" title="legal contracts">legal contracts</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20negative%20ranking%20loss" title=" multiple negative ranking loss"> multiple negative ranking loss</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20inference" title=" natural language inference"> natural language inference</a>, <a href="https://publications.waset.org/abstracts/search?q=sentence%20transformers" title=" sentence transformers"> sentence transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20textual%20similarity" title=" semantic textual similarity"> semantic textual similarity</a> </p> <a href="https://publications.waset.org/abstracts/156624/semantic-textual-similarity-on-contracts-exploring-multiple-negative-ranking-losses-for-sentence-transformers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156624.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10176</span> Literacy in First and Second Language: Implication for Language Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Inuwa%20Danladi%20Bawa">Inuwa Danladi Bawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the challenges of African states in the development of education in the past and the present is the problem of literacy. Literacy in the first language is seen as a strong base for the development of second language; they are mostly the language of education. Language development is an offshoot of language planning; so the need to develop literacy in both first and second language affects language education and predicts the extent of achievement of the entire education sector. The need to balance literacy acquisition in first language for good conditioning the acquisition of second language is paramount. Likely constraints that includes; non-standardization, underdeveloped and undeveloped first languages are among many. Solutions to some of these include the development of materials and use of the stages and levels of literacy acquisition. This is with believed that a child writes well in second language if he has literacy in the first language. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=first%20language" title="first language">first language</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language" title=" second language"> second language</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy" title=" literacy"> literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=english%20language" title=" english language"> english language</a>, <a href="https://publications.waset.org/abstracts/search?q=linguistics" title=" linguistics"> linguistics</a> </p> <a href="https://publications.waset.org/abstracts/3745/literacy-in-first-and-second-language-implication-for-language-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10175</span> Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: ’Reddit’</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasmeen%20Bassas">Yasmeen Bassas</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Kuebler"> Sandra Kuebler</a>, <a href="https://publications.waset.org/abstracts/search?q=Allen%20Riddell"> Allen Riddell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Native language identification is one of the growing subfields in natural language processing (NLP). The task of native language identification (NLI) is mainly concerned with predicting the native language of an author’s writing in a second language. In this paper, we investigate the performance of two types of features; content-based features vs. content independent features, when they are evaluated on a different corpus (using social media data “Reddit”). In this NLI task, the predefined models are trained on one corpus (TOEFL), and then the trained models are evaluated on different data using an external corpus (Reddit). Three classifiers are used in this task; the baseline, linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content independent ones when tested within the corpus and across corpus. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=NLI" title="NLI">NLI</a>, <a href="https://publications.waset.org/abstracts/search?q=NLP" title=" NLP"> NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20features" title=" content-based features"> content-based features</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20independent%20features" title=" content independent features"> content independent features</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media%20corpus" title=" social media corpus"> social media corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/142396/native-language-identification-with-cross-corpus-evaluation-using-social-media-data-reddit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10174</span> Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhengrong%20Wu">Zhengrong Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Haibo%20Yang"> Haibo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which heavily rely on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Utilizing the P-Tune method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as a knowledge database support for disaster emergency response. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=large%20language%20model" title="large language model">large language model</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20graph" title=" knowledge graph"> knowledge graph</a>, <a href="https://publications.waset.org/abstracts/search?q=disaster" title=" disaster"> disaster</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/182751/coupling-large-language-models-with-disaster-knowledge-graphs-for-intelligent-construction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182751.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">56</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10173</span> Transportation Language Register as One of Language Community</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Diyah%20Atiek%20Mustikawati">Diyah Atiek Mustikawati</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language register refers to a variety of a language used for particular purpose or in a particular social setting. Language register also means as a concept of adapting one’s use of language to conform to standards or tradition in a given professional or social situation. This descriptive study tends to discuss about the form of language register in transportation aspect, factors, also the function of use it. Mostly, language register in transportation aspect uses short sentences in form of informal register. The factor caused language register used are speaker, word choice, background of language. The functions of language register in transportations aspect are to make communication between crew easily, also to keep safety when they were in bad condition. Transportation language register developed naturally as one of variety of language used. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language%20register" title="language register">language register</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20variety" title=" language variety"> language variety</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=transportation" title=" transportation"> transportation</a> </p> <a href="https://publications.waset.org/abstracts/37039/transportation-language-register-as-one-of-language-community" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">487</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10172</span> A Review of Research on Pre-training Technology for Natural Language Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Moquan%20Gong">Moquan Gong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The early field of natural language processing has long used word vector methods such as Word2Vec to encode text. These word vector methods can also be regarded as static pre-training techniques. However, this context-free text representation brings very limited improvement to subsequent natural language processing tasks and cannot solve the problem of word polysemy. ELMo proposes a context-sensitive text representation method that can effectively handle polysemy problems. Since then, pre-training language models such as GPT and BERT have been proposed one after another. Among them, the BERT model has significantly improved its performance on many typical downstream tasks, greatly promoting the technological development in the field of natural language processing, and has since entered the field of natural language processing. The era of dynamic pre-training technology. Since then, a large number of pre-trained language models based on BERT and XLNet have continued to emerge, and pre-training technology has become an indispensable mainstream technology in the field of natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training technology in the field of natural language processing, including early static pre-training technology and classic dynamic pre-training technology; and then briefly sorts out a series of enlightening technologies. Pre-training technology, including improved models based on BERT and XLNet; on this basis, analyze the problems faced by current pre-training technology research; finally, look forward to the future development trend of pre-training technology. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-training" title=" pre-training"> pre-training</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20model" title=" language model"> language model</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20vectors" title=" word vectors"> word vectors</a> </p> <a href="https://publications.waset.org/abstracts/183121/a-review-of-research-on-pre-training-technology-for-natural-language-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183121.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">57</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10171</span> Exploring Tweet Geolocation: Leveraging Large Language Models for Post-Hoc Explanations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarra%20Hasni">Sarra Hasni</a>, <a href="https://publications.waset.org/abstracts/search?q=Sami%20Faiz"> Sami Faiz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, location prediction on social networks has gained significant attention, with short and unstructured texts like tweets posing additional challenges. Advanced geolocation models have been proposed, increasing the need to explain their predictions. In this paper, we provide explanations for a geolocation black-box model using LIME and SHAP, two state-of-the-art XAI (eXplainable Artificial Intelligence) methods. We extend our evaluations to Large Language Models (LLMs) as post hoc explainers for tweet geolocation. Our preliminary results show that LLMs outperform LIME and SHAP by generating more accurate explanations. Additionally, we demonstrate that prompts with examples and meta-prompts containing phonetic spelling rules improve the interpretability of these models, even with informal input data. This approach highlights the potential of advanced prompt engineering techniques to enhance the effectiveness of black-box models in geolocation tasks on social networks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=large%20language%20model" title="large language model">large language model</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20hoc%20explainer" title=" post hoc explainer"> post hoc explainer</a>, <a href="https://publications.waset.org/abstracts/search?q=prompt%20engineering" title=" prompt engineering"> prompt engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20explanation" title=" local explanation"> local explanation</a>, <a href="https://publications.waset.org/abstracts/search?q=tweet%20geolocation" title=" tweet geolocation"> tweet geolocation</a> </p> <a href="https://publications.waset.org/abstracts/190334/exploring-tweet-geolocation-leveraging-large-language-models-for-post-hoc-explanations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190334.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10170</span> Domain specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20Khalov">Andrey Khalov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) for using to generate synthetic datasets to improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ontology%20mapping" title="ontology mapping">ontology mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=R-GNN" title=" R-GNN"> R-GNN</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20extraction" title=" knowledge extraction"> knowledge extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20language%20models" title=" large language models"> large language models</a>, <a href="https://publications.waset.org/abstracts/search?q=NER" title=" NER"> NER</a>, <a href="https://publications.waset.org/abstracts/search?q=knowlege%20graph" title=" knowlege graph"> knowlege graph</a> </p> <a href="https://publications.waset.org/abstracts/192578/domain-specific-ontology-based-knowledge-extraction-using-r-gnn-and-large-language-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192578.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">16</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10169</span> A Comparative Study of Approaches in User-Centred Health Information Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harsh%20Thakkar">Harsh Thakkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ganesh%20Iyer"> Ganesh Iyer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval namely: Language Model (LM) and Vector Space Model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM based retrieval systems outperform VSM based systems on various fronts. From the results we conclude that the state-of-art system scores for MAP was 0.4146, P@10 was 0.7560 and NDCG@10 was 0.7445, respectively. All of these score were reported by systems built on language modeling approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clinical%20document%20retrieval" title="clinical document retrieval">clinical document retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=concept-based%20information%20retrieval" title=" concept-based information retrieval"> concept-based information retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=query%20expansion" title=" query expansion"> query expansion</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20models" title=" language models"> language models</a>, <a href="https://publications.waset.org/abstracts/search?q=vector%20space%20models" title=" vector space models"> vector space models</a> </p> <a href="https://publications.waset.org/abstracts/57392/a-comparative-study-of-approaches-in-user-centred-health-information-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10168</span> A Graph-Based Retrieval Model for Passage Search</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Junjie%20Zhong">Junjie Zhong</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Hong"> Kai Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Wang"> Lei Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Passage Retrieval (PR) plays an important role in many Natural Language Processing (NLP) tasks. Traditional efficient retrieval models relying on exact term-matching, such as TF-IDF or BM25, have nowadays been exceeded by pre-trained language models which match by semantics. Though they gain effectiveness, deep language models often require large memory as well as time cost. To tackle the trade-off between efficiency and effectiveness in PR, this paper proposes Graph Passage Retriever (GraphPR), a graph-based model inspired by the development of graph learning techniques. Different from existing works, GraphPR is end-to-end and integrates both term-matching information and semantics. GraphPR constructs a passage-level graph from BM25 retrieval results and trains a GCN-like model on the graph with graph-based objectives. Passages were regarded as nodes in the constructed graph and were embedded in dense vectors. PR can then be implemented using embeddings and a fast vector-similarity search. Experiments on a variety of real-world retrieval datasets show that the proposed model outperforms related models in several evaluation metrics (e.g., mean reciprocal rank, accuracy, F1-scores) while maintaining a relatively low query latency and memory usage. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=efficiency" title="efficiency">efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=effectiveness" title=" effectiveness"> effectiveness</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20learning" title=" graph learning"> graph learning</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20model" title=" language model"> language model</a>, <a href="https://publications.waset.org/abstracts/search?q=passage%20retrieval" title=" passage retrieval"> passage retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=term-matching%20model" title=" term-matching model"> term-matching model</a> </p> <a href="https://publications.waset.org/abstracts/162229/a-graph-based-retrieval-model-for-passage-search" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162229.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10167</span> The Mother Tongue and Related Issues in Algeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farouk%20A.N.%20Bouhadiba">Farouk A.N. Bouhadiba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Based on Fishman’s Theoretical Paradigm (1991), we shall first discuss his three value positions for the case of the so called minority native languages in Algeria and how they may be included into a global language teaching program in Algeria. We shall then move on to his scale on language loss, language maintenance and language renewal with illustrating examples taken from the Algerian context. The second part of our talk relates to pedagogical issues on how to proceed for a smooth transition from mother tongue to school tongue, what methods or approaches suit best the teaching of mother tongue and school tongue (Immersion Programs, The Natural Approach, Applied Literacy Programs, The Berlitz Method, etc.). We shall end up our talk on how one may reshuffle the current issues on the “Arabic-only” movement and the abrupt transition from mother tongue to school tongue in use today by opting for teaching programs that involve pre-school language acquisition and in-school language acquisition grammars, and thus pave the way to effective language teaching programs and living curricula and pedagogies such as language nests, intergenerational continuity, communication and identity teaching programs, which result in better language teaching models that make language policies become a reality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=native%20languages" title="native languages">native languages</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20maintenance" title=" language maintenance"> language maintenance</a>, <a href="https://publications.waset.org/abstracts/search?q=mother%20tongue" title=" mother tongue"> mother tongue</a>, <a href="https://publications.waset.org/abstracts/search?q=school%20tongue" title=" school tongue"> school tongue</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=Algeria" title=" Algeria"> Algeria</a> </p> <a href="https://publications.waset.org/abstracts/189378/the-mother-tongue-and-related-issues-in-algeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189378.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">31</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10166</span> Towards Efficient Reasoning about Families of Class Diagrams Using Union Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tejush%20Badal">Tejush Badal</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanaa%20Alwidian"> Sanaa Alwidian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Class diagrams are useful tools within the Unified Modelling Language (UML) to model and visualize the relationships between, and properties of objects within a system. As a system evolves over time and space (e.g., products), a series of models with several commonalities and variabilities create what is known as a model family. In circumstances where there are several versions of a model, examining each model individually, becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams into Union Models to represent model families using a single generic model. The paper aims to analyze and reason about a family of class diagrams using union models as opposed to individual analysis of each member model in the family. The union algorithm provides a holistic view of the model family, where the latter cannot be otherwise obtained from an individual analysis approach, this in turn, enhances the analysis performed in terms of speeding up the time needed to analyze a family of models together as opposed to analyzing individual models, one model at a time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=analysis" title="analysis">analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=class%20diagram" title=" class diagram"> class diagram</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20family" title=" model family"> model family</a>, <a href="https://publications.waset.org/abstracts/search?q=unified%20modeling%20language" title=" unified modeling language"> unified modeling language</a>, <a href="https://publications.waset.org/abstracts/search?q=union%20model" title=" union model"> union model</a> </p> <a href="https://publications.waset.org/abstracts/168580/towards-efficient-reasoning-about-families-of-class-diagrams-using-union-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10165</span> Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheikh%20Bamba%20Dione">Cheikh Bamba Dione</a>, <a href="https://publications.waset.org/abstracts/search?q=Alla%20Lo"> Alla Lo</a>, <a href="https://publications.waset.org/abstracts/search?q=Elhadji%20Mamadou%20Nguer"> Elhadji Mamadou Nguer</a>, <a href="https://publications.waset.org/abstracts/search?q=Siley%20O.%20Ba"> Siley O. Ba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back translation, and the copied corpus method. We evaluate the models using the BLEU score and find that transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level-based units. For subword-level models, using back translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) language translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied method data with back translation leads to a substantial improvement of the translation quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=backtranslation" title="backtranslation">backtranslation</a>, <a href="https://publications.waset.org/abstracts/search?q=low-resource%20language" title=" low-resource language"> low-resource language</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20machine%20translation" title=" neural machine translation"> neural machine translation</a>, <a href="https://publications.waset.org/abstracts/search?q=sequence-to-sequence" title=" sequence-to-sequence"> sequence-to-sequence</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolof" title=" Wolof"> Wolof</a> </p> <a href="https://publications.waset.org/abstracts/135110/neural-machine-translation-for-low-resource-african-languages-benchmarking-state-of-the-art-transformer-for-wolof" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10164</span> Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Irene%20Yi">Irene Yi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are at best, large corpora of human literature and at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that in the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data, and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language having to deal with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given its semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies. The progression of society comes hand in hand with not only its language, but how machines process those natural languages. 
These ideas are vital to the development of natural language models in technology, and they must be taken into account immediately. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gendered%20grammar" title="gendered grammar">gendered grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=misogynistic%20language" title=" misogynistic language"> misogynistic language</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/123692/gender-bias-in-natural-language-processing-machines-reflect-misogyny-in-society" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123692.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10163</span> Improving Academic Literacy in the Secondary History Classroom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wilhelmina%20van%20den%20Berg">Wilhelmina van den Berg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Through intentionally developing the Register Continuum and the Functional Model of Language in the secondary history classroom, teachers can effectively build a teaching and learning cycle geared towards literacy improvement and EAL differentiation. Developing an understanding of, and engaging students in, the field, tenor, and tone of written and spoken language allows students to build the foundation for greater academic achievement through literacy skills integrated into the history classroom. Building a variety of scaffolds into lessons within these models means students can improve their academic language and communication skills. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=academic%20language" title="academic language">academic language</a>, <a href="https://publications.waset.org/abstracts/search?q=EAL" title=" EAL"> EAL</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20model%20of%20language" title=" functional model of language"> functional model of language</a>, <a href="https://publications.waset.org/abstracts/search?q=international%20baccalaureate" title=" international baccalaureate"> international baccalaureate</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy%20skills" title=" literacy skills"> literacy skills</a> </p> <a href="https://publications.waset.org/abstracts/157695/improving-academic-literacy-in-the-secondary-history-classroom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157695.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">62</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10162</span> Models of Bilingual Education in Majority Language Contexts: An Exploratory Study of Bilingual Programmes in Qatari Primary Schools</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Al-Maadheed">Fatma Al-Maadheed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Following an ethnographic approach this study explored bilingual programmes offered by two types of primary schools in Qatar: international and Independent schools. Qatar with its unique linguistic and socio-economic situation launched a new initiative for educatiobnal development in 2001 but with hardly any research linked to theses changes. The study reveals that the Qatari bilingual schools context was one of heteroglossia, with three codes in operation: Modern Standard Arabic, Colloquial Arabic dialects and English. The two schools adopted different models of bilingualism. The international school adopted a strict separation policy between the two languages following a monoglossic belief. The independent school was found to apply a flexible language policy. The study also highlighted the daily challnges produced from the diglossia situation in Qatar, the difference between students and teacher dialect as well as acquiring literacy in the formal language. In addition to an abscence of a clear language policy in Schools, the study brought attention to the instructional methods utilised in language teaching which are mostly associated with successful bilingual education. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diglossia" title="diglossia">diglossia</a>, <a href="https://publications.waset.org/abstracts/search?q=instructional%20methods" title=" instructional methods"> instructional methods</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20policy" title=" language policy"> language policy</a>, <a href="https://publications.waset.org/abstracts/search?q=qatari%20primary%20schools" title=" qatari primary schools"> qatari primary schools</a> </p> <a href="https://publications.waset.org/abstracts/30944/models-of-bilingual-education-in-majority-language-contexts-an-exploratory-study-of-bilingual-programmes-in-qatari-primary-schools" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30944.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">473</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10161</span> Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daleesha%20M.%20Viswanathan">Daleesha M. Viswanathan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumam%20Mary%20Idicula"> Sumam Mary Idicula</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dynamic hand gestures are an intrinsic component in sign language communication. Extracting spatial temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. Kalman filter algorithm and Hidden Markov Models (HMM) models are incorporated with this recognition system for hand trajectory tracking and for spatial temporal classification, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=orientation%20features" title="orientation features">orientation features</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20feature%20vector" title=" discrete feature vector"> discrete feature vector</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM." 
title=" HMM."> HMM.</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a> </p> <a href="https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10160</span> Diagonal Vector Autoregressive Models and Their Properties</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Usoro%20Anthony%20E.">Usoro Anthony E.</a>, <a href="https://publications.waset.org/abstracts/search?q=Udoh%20Emediong"> Udoh Emediong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diagonal Vector Autoregressive Models are special classes of the general vector autoregressive models identified under certain conditions, where parameters are restricted to the diagonal elements in the coefficient matrices. Variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably with the general VAR models. The advantage of the diagonal models over the existing models is that the new models are parsimonious, given the reduction in the interactive coefficients of the general VAR models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=VAR%20models" title="VAR models">VAR models</a>, <a href="https://publications.waset.org/abstracts/search?q=diagonal%20VAR%20models" title=" diagonal VAR models"> diagonal VAR models</a>, <a href="https://publications.waset.org/abstracts/search?q=variance" title=" variance"> variance</a>, <a href="https://publications.waset.org/abstracts/search?q=autocovariance" title=" autocovariance"> autocovariance</a>, <a href="https://publications.waset.org/abstracts/search?q=autocorrelations" title=" autocorrelations"> autocorrelations</a> </p> <a href="https://publications.waset.org/abstracts/157980/diagonal-vector-autoregressive-models-and-their-properties" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">116</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10159</span> Enhancing English Language Learning through Learners Cultural Background</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Attahiru">A. 
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10159</span> Enhancing English Language Learning through Learners Cultural Background</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Attahiru">A. Attahiru</a>, <a href="https://publications.waset.org/abstracts/search?q=Rabi%20Abdullahi%20Danjuma"> Rabi Abdullahi Danjuma</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Bint"> Fatima Bint</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language and culture are two closely related concepts, each affecting the other. This paper attempts to examine the definitions of language and culture by discussing the relationship between them. The paper further presents some instructional strategies for the teaching of language and culture, as well as the influence of culture on language. It also looks at the implications for language education; finally, recommendations and a conclusion are drawn. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=culture" title="culture">culture</a>, <a href="https://publications.waset.org/abstracts/search?q=language" title=" language"> language</a>, <a href="https://publications.waset.org/abstracts/search?q=relationship" title=" relationship"> relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=strategies" title=" strategies"> strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a> </p> <a href="https://publications.waset.org/abstracts/22922/enhancing-english-language-learning-through-learners-cultural-background" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22922.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">415</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10158</span> User Intention Generation with Large Language Models Using Chain-of-Thought Prompting Title</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gangmin%20Li">Gangmin Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Fan%20Yang"> Fan Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Personalized recommendation is crucial for any recommendation system. One of the techniques for personalized recommendation is to identify user intention. Traditional user intention identification uses the user’s selection when facing multiple items. This modeling relies primarily on historical behaviour data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in Large Language Models (LLMs) like ChatGPT, we present an approach for user intention identification by embracing LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to LLMs and design a collection of prompts to align the LLM's responses across various recommendation tasks encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation when user intention is explicitly identified and merged into the user model. 
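<p class="card-text">The prompting strategy can be sketched as below. <code>complete</code> is a hypothetical stand-in for an LLM call (the paper mentions ChatGPT, but no real client API is implied here), and the profile, history, and step wording are invented for illustration.</p>
<pre><code>
# Chain-of-thought prompt assembly for user-intention identification.

def complete(prompt):
    # Hypothetical LLM call; would return the model's text response.
    return "[LLM response based on the prompt]"

def intention_prompt(profile, history):
    steps = [
        "You are inferring what this user currently wants.",
        "User profile: %s" % profile,
        "Recent ratings, searches and browsing: %s" % "; ".join(history),
        "Let's think step by step:",
        "1. Summarise the user's stable tastes from the profile.",
        "2. Identify what changed in the recent history.",
        "3. State the most likely current intention in one sentence.",
    ]
    return "\n".join(steps)

profile = "35, enjoys documentaries and hiking gear reviews"
history = ["rated 'Free Solo' 5 stars", "searched 'ultralight tent'",
           "browsed trail-running shoes"]
print(complete(intention_prompt(profile, history)))
</code></pre>
<p class="card-text">The numbered reasoning steps are the chain-of-thought element: the model is asked to derive the intention explicitly before it is merged into the user model, rather than to emit a recommendation directly.</p>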
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=personalized%20recommendation" title="personalized recommendation">personalized recommendation</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20user%20modelling" title=" generative user modelling"> generative user modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20intention%20identification" title=" user intention identification"> user intention identification</a>, <a href="https://publications.waset.org/abstracts/search?q=large%20language%20models" title=" large language models"> large language models</a>, <a href="https://publications.waset.org/abstracts/search?q=chain-of-thought%20prompting" title=" chain-of-thought prompting"> chain-of-thought prompting</a> </p> <a href="https://publications.waset.org/abstracts/185916/user-intention-generation-with-large-language-models-using-chain-of-thought-prompting-title" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10157</span> Aspects of Diglossia in Arabic Language Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adil%20Ishag">Adil Ishag</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diglossia emerges in a situation where two distinctive varieties of a language are used alongside within a certain community. In this case, one is considered as a high or standard variety and the second one as a low or colloquial variety. Arabic is an extreme example of a highly diglossic language. This diglossity is due to the fact that Arabic is one of the most spoken languages and spread over 22 Countries in two continents as a mother tongue, and it is also widely spoken in many other Islamic countries as a second language or simply the language of Quran. The geographical variation between the countries where the language is spoken and the duality of the classical Arabic and daily spoken dialects in the Arab world on the other hand; makes the Arabic language one of the most diglossic languages. This paper tries to investigate this phenomena and its relation to learning Arabic as a first and second language. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic%20language" title="Arabic language">Arabic language</a>, <a href="https://publications.waset.org/abstracts/search?q=diglossia" title=" diglossia"> diglossia</a>, <a href="https://publications.waset.org/abstracts/search?q=first%20and%20second%20language" title=" first and second language"> first and second language</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20learning" title=" language learning"> language learning</a> </p> <a href="https://publications.waset.org/abstracts/24533/aspects-of-diglossia-in-arabic-language-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24533.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">564</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=339">339</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=340">340</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=language%20models&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a 
href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
