Search results for: American Sign Language (ASL)

Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4817

[4817] Transmigration of American Sign Language from the American Deaf Community to the American Society
Authors: Russell Rosen
Abstract: American Sign Language (ASL) has been developed and used by signing deaf and hard of hearing (DHH) individuals in the American Deaf community since the early nineteenth century. In the last two decades, secondary schools in the US have offered ASL for foreign language credit to secondary school learners. The learners who study ASL as a foreign language are largely American native speakers of English. They not only learn ASL in US schools but also create spaces under certain interactional and social conditions in their home communities outside of classrooms, where they use ASL with each other instead of their native English. This phenomenon is a transmigration of language from a native social group to a non-native, non-kin social group. This study examines the transmigration of ASL from the signing Deaf community to the general speaking and hearing American society, and discusses its theoretical implications.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20Sign%20Language" title="American Sign Language">American Sign Language</a>, <a href="https://publications.waset.org/abstracts/search?q=Foreign%20Language" title=" Foreign Language"> Foreign Language</a>, <a href="https://publications.waset.org/abstracts/search?q=Language%20transmission" title=" Language transmission"> Language transmission</a>, <a href="https://publications.waset.org/abstracts/search?q=United%20States" title=" United States"> United States</a> </p> <a href="https://publications.waset.org/abstracts/64273/transmigration-of-american-sign-language-from-the-american-deaf-community-to-the-american-society" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64273.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4816</span> Prototyping a Portable, Affordable Sign Language Glove</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vidhi%20Jain">Vidhi Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication between speakers and non-speakers of American Sign Language (ASL) can be problematic, inconvenient, and expensive. This project attempts to bridge the communication gap by designing a portable glove that captures the user’s ASL gestures and outputs the translated text on a smartphone. The glove is equipped with flex sensors, contact sensors, and a gyroscope to measure the flexion of the fingers, the contact between fingers, and the rotation of the hand. The glove’s Arduino UNO microcontroller analyzes the sensor readings to identify the gesture from a library of learned gestures. The Bluetooth module transmits the gesture to a smartphone. Using this device, one day speakers of ASL may be able to communicate with others in an affordable and convenient way. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title="sign language">sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=morse%20code" title=" morse code"> morse code</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title=" American sign language"> American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a> </p> <a href="https://publications.waset.org/abstracts/183474/prototyping-a-portable-affordable-sign-language-glove" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4815</span> Mouthing Patterns in Indian Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Kulshreshtha">Neha Kulshreshtha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the patterns of 'Mouthing', a non-manual marker, and its distribution in Indian Sign Language (ISL). Linguistic research in Indian Sign Language is an emerging field where much is needed to be done. The little research which has happened focuses on the structure of ISL in terms of physical or manual markers, therefore a study of mouthing patterns would give an insight into the distribution of this particular non-manual marker. Data has been collected with the help of native ISL users through various techniques in which natural signs can be captured, for example, storytelling, informal conversations etc. The aim of the study is to find out the various situations where mouthing is used. Sometimes, the mouthing is not actually the articulation of the word as spoken in the local languages. The paper aims to find out whether the mouthing patterns in ISL are influenced by any local language or they are independent of any influence from the local language or both. Mouthing patterns have been studied in many sign languages and an investigation into ISL will reveal whether it falls in pattern with the other sign languages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title="Indian sign language">Indian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=mouthing" title=" mouthing"> mouthing</a>, <a href="https://publications.waset.org/abstracts/search?q=non-manual%20marker" title=" non-manual marker"> non-manual marker</a>, <a href="https://publications.waset.org/abstracts/search?q=spoken%20language%20influence" title=" spoken language influence"> spoken language influence</a> </p> <a href="https://publications.waset.org/abstracts/78826/mouthing-patterns-in-indian-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78826.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">264</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4814</span> Signed Language Phonological Awareness: Building Deaf Children&#039;s Vocabulary in Signed and Written Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lynn%20Mcquarrie">Lynn Mcquarrie</a>, <a href="https://publications.waset.org/abstracts/search?q=Charlotte%20Enns"> Charlotte Enns</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of this project was to develop a visually-based, signed language phonological awareness training program and to pilot the intervention with signing deaf children (ages 6 -10 years/ grades 1 - 4) who were beginning readers to assess the effects of systematic explicit American Sign Language (ASL) phonological instruction on both ASL vocabulary and English print vocabulary learning. Growing evidence that signing learners utilize visually-based signed language phonological knowledge (homologous to the sound-based phonological level of spoken language processing) when reading underscore the critical need for further research on the innovation of reading instructional practices for visual language learners. Multiple single-case studies using a multiple probe design across content (i.e., sign and print targets incorporating specific ASL phonological parameters – handshapes) was implemented to examine if a functional relationship existed between instruction and acquisition of these skills. The results indicated that for all cases, representing a variety of language abilities, the visually-based phonological teaching approach was exceptionally powerful in helping children to build their sign and print vocabularies. Although intervention/teaching studies have been essential in testing hypotheses about spoken language phonological processes supporting non-deaf children’s reading development, there are no parallel intervention/teaching studies exploring hypotheses about signed language phonological processes in supporting deaf children’s reading development. This study begins to provide the needed evidence to pursue innovative teaching strategies that incorporate the strengths of visual learners. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language%20phonological%20awareness" title="American sign language phonological awareness">American sign language phonological awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=dual%20language%20strategies" title=" dual language strategies"> dual language strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=vocabulary%20learning" title=" vocabulary learning"> vocabulary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20reading" title=" word reading"> word reading</a> </p> <a href="https://publications.waset.org/abstracts/46214/signed-language-phonological-awareness-building-deaf-childrens-vocabulary-in-signed-and-written-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46214.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4813</span> Pattern Recognition Based on Simulation of Chemical Senses (SCS)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nermeen%20El%20Kashef">Nermeen El Kashef</a>, <a href="https://publications.waset.org/abstracts/search?q=Yasser%20Fouad"> Yasser Fouad</a>, <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Mahar"> Khaled Mahar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> No AI-complete system can model the human brain or behavior, without looking at the totality of the whole situation and incorporating a combination of senses. This paper proposes a Pattern Recognition model based on Simulation of Chemical Senses (SCS) for separation and classification of sign language. The model based on human taste controlling strategy. The main idea of the introduced model is motivated by the facts that the tongue cluster input substance into its basic tastes first, and then the brain recognizes its flavor. To implement this strategy, two level architecture is proposed (this is inspired from taste system). The separation-level of the architecture focuses on hand posture cluster, while the classification-level of the architecture to recognizes the sign language. The efficiency of proposed model is demonstrated experimentally by recognizing American Sign Language (ASL) data set. The recognition accuracy obtained for numbers of ASL is 92.9 percent. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=biocybernetics" title=" biocybernetics"> biocybernetics</a>, <a href="https://publications.waset.org/abstracts/search?q=gustatory%20system" title=" gustatory system"> gustatory system</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=taste%20sense" title=" taste sense"> taste sense</a> </p> <a href="https://publications.waset.org/abstracts/40814/pattern-recognition-based-on-simulation-of-chemical-senses-scs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40814.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4812</span> A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rania%20Ahmed%20Kadry%20Abdel%20Gawad%20Birry">Rania Ahmed Kadry Abdel Gawad Birry</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El-Habrouk"> Mohamed El-Habrouk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a brief survey of the techniques used for sign language recognition along with the types of sensors used to perform the task. It presents a modified method for identification of an isolated sign language gesture using Microsoft Kinect with the OpenNI framework. It presents the way of extracting robust features from the depth image provided by Microsoft Kinect and the OpenNI interface and to use them in creating a robust and accurate gesture recognition system, for the purpose of ASL translation. The Prime Sense’s Natural Interaction Technology for End-user - NITE™ - was also used in the C++ implementation of the system. The algorithm presents a simple finger counting algorithm for static signs as well as directional Finite State Machine (FSM) description of the hand motion in order to help in translating a sign language gesture. This includes both letters and numbers performed by a user, which in-turn may be used as an input for voice pronunciation systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=American%20sign%20language" title="American sign language">American sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20counting" title=" finger counting"> finger counting</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a> </p> <a href="https://publications.waset.org/abstracts/43466/a-novel-combined-finger-counting-and-finite-state-machine-technique-for-asl-translation-using-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">296</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4811</span> Teaching Italian Sign Language in Higher Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Tagarelli%20De%20Monte">Maria Tagarelli De Monte</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since its formal recognition in 2021, Italian Sign Language (LIS) and interpreters’ education has become a topic for higher education in Italian universities. In April 2022, Italian universities have been invited to present their proposals to create sign language courses for interpreters’ training for both LIS and tactile LIS. As a result, a few universities have presented a three-year course leading candidate students from the introductory level to interpreters. In such a context, there is an open debate not only on the fact that three years may not be enough to prepare skillful interpreters but also on the need to refer to international standards in the definition of the training path to follow. Among these, are the Common European Framework of Reference (CEFR) for languages and Dublin’s descriptors. This contribution will discuss the potentials and the challenges given by LIS training in academic settings, by comparing traditional studies to the requests coming from universities. Particular attention will be given to the use of CEFR as a reference document for the Italian Sign Language Curriculum. Its use has given me the chance to reflect on how LIS can be taught in higher education, and the adaptations that need to be addressed to respect the visual-gestural nature of sign language and the formal requirements of academic settings. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Italian%20sign%20language" title="Italian sign language">Italian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20curriculum" title=" sign language curriculum"> sign language curriculum</a>, <a href="https://publications.waset.org/abstracts/search?q=interpreters%20education" title=" interpreters education"> interpreters education</a>, <a href="https://publications.waset.org/abstracts/search?q=CEFR" title=" CEFR"> CEFR</a> </p> <a href="https://publications.waset.org/abstracts/185246/teaching-italian-sign-language-in-higher-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185246.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">45</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4810</span> Revitalization of Sign Language through Deaf Theatre: A Linguistic Analysis of an Art Form Which Combines Physical Theatre, Poetry, and Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gal%20Belsitzman">Gal Belsitzman</a>, <a href="https://publications.waset.org/abstracts/search?q=Rose%20Stamp"> Rose Stamp</a>, <a href="https://publications.waset.org/abstracts/search?q=Atay%20Citron"> Atay Citron</a>, <a href="https://publications.waset.org/abstracts/search?q=Wendy%20Sandler"> Wendy Sandler</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sign languages are considered endangered. The vitality of sign languages is compromised by its unique sociolinguistic situation, in which hearing parents that give birth to deaf children usually decide to cochlear implant their child. Therefore, these children don’t acquire their natural language – Sign Language. Despite this, many sign languages, such as Israeli Sign Language (ISL) are thriving. The continued survival of similar languages under threat has been associated with the remarkable resilience of the language community. In particular, deaf literary traditions are central in reminding the community of the importance of the language. One example of a deaf literary tradition which has received increased popularity in recent years is deaf theatre. The Ebisu Sign Language Theatre Laboratory, developed as part of the multidisciplinary Grammar of the Body Research Project, is the first deaf theatre company in Israel. Ebisu Theatre combines physical theatre and sign language research, to allow for a natural laboratory to analyze the creative use of the body. In this presentation, we focus on the recent theatre production called ‘Their language’ which tells of the struggle faced by the deaf community to use their own natural language in the education system. A thorough analysis unravels how linguistic properties are integrated with the use of poetic devices and physical theatre techniques in this performance, enabling wider access by both deaf and hearing audiences, without interpretation. 
[4809] Brazilian Sign Language: A Synthesis of the Research in the Period from 2000 to 2017
Authors: Maria da Gloria Guara-Tavares
Abstract: This article reports a synthesis of the research in Brazilian Sign Language conducted from 2000 to 2017. The objective of the synthesis was to identify the most researched areas and the most used methodologies. Articles published in three Brazilian journals of translation studies, unpublished dissertations, and theses were included in the analysis; abstracts and the method sections of the papers were scrutinized. Sixty studies were analyzed, and overall results indicate that research in Brazilian Sign Language has been fragmented across several areas, such as linguistic aspects, facial expressions, subtitling, identity issues, bilingualism, and interpretation strategies. Concerning research methods, the synthesis reveals that most research is qualitative in nature. Moreover, results show that the cognitive aspects of Brazilian Sign Language seem to be poorly explored. Implications for a future research agenda are also discussed.
Keywords: Brazilian Sign Language, qualitative methods, research agenda, synthesis
Downloads: 240
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brazilian%20sign%20language" title="Brazilian sign language">Brazilian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=qualitative%20methods" title=" qualitative methods"> qualitative methods</a>, <a href="https://publications.waset.org/abstracts/search?q=research%20agenda" title=" research agenda"> research agenda</a>, <a href="https://publications.waset.org/abstracts/search?q=synthesis" title=" synthesis"> synthesis</a> </p> <a href="https://publications.waset.org/abstracts/91686/brazilian-sign-language-a-synthesis-of-the-research-in-the-period-from-2000-to-2017" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91686.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4808</span> A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcio%20Leal">Marcio Leal</a>, <a href="https://publications.waset.org/abstracts/search?q=Marta%20Villamil"> Marta Villamil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computacional recognition of sign languages aims to allow a greater social and digital inclusion of deaf people through interpretation of their language by computer. This article presents a model of recognition of two of global parameters from sign languages; hand configurations and hand movements. Hand motion is captured through an infrared technology and its joints are built into a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations and Dynamic Time Warping (DWT) recognizes hand motion. Beyond of the method of sign recognition, we provide a dataset of hand configurations and motion capture built with help of fluent professionals in sign languages. Despite this technology can be used to translate any sign from any signs dictionary, Brazilian Sign Language (Libras) was used as case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title=" dynamic time warping"> dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared" title=" infrared"> infrared</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a> </p> <a href="https://publications.waset.org/abstracts/94322/a-motion-dictionary-to-real-time-recognition-of-sign-language-alphabet-using-dynamic-time-warping-and-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/94322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4807</span> Comparison of Sign Language Skill and Academic Achievement of Deaf Students in Special and Inclusive Primary Schools of South Nation Nationalities People Region, Ethiopia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tesfaye%20Basha">Tesfaye Basha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this study was to examine the sign language and academic achievement of deaf students in special and inclusive primary schools of Southern Ethiopia. The study used a mixed-method to collect varied data. The study contained Signed Amharic and English skill tasks, questionnaire, 8th-grade Primary School Leaving Certificate Examination results, classroom observation, and interviews. For quantitative (n=70) deaf students and for qualitative data collection, 16 participants were involved. The finding revealed that the limitation of sign language is a problem in signing and academic achievements. This displays that schools are not linguistically rich to enable sign language achievement for deaf students. Moreover, the finding revealed that the contribution of Total Communication in the growth of natural sign language for deaf students was unsatisfactory. The results also indicated that special schools of deaf students performed better sign language skills and academic achievement than inclusive schools. In addition, the findings revealed that high signed skill group showed higher academic achievement than the low skill group. This displayed that sign language skill is highly associated with academic achievement. In addition, to qualify deaf students in sign language and academics, teacher institutions must produce competent teachers on how to teach deaf students with sign language and literacy skills. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=academic%20achievement" title="academic achievement">academic achievement</a>, <a href="https://publications.waset.org/abstracts/search?q=inclusive%20school" title=" inclusive school"> inclusive school</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=signed%20Amharic" title=" signed Amharic"> signed Amharic</a>, <a href="https://publications.waset.org/abstracts/search?q=signed%20English" title=" signed English"> signed English</a>, <a href="https://publications.waset.org/abstracts/search?q=special%20school" title=" special school"> special school</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20communication" title=" total communication"> total communication</a> </p> <a href="https://publications.waset.org/abstracts/130014/comparison-of-sign-language-skill-and-academic-achievement-of-deaf-students-in-special-and-inclusive-primary-schools-of-south-nation-nationalities-people-region-ethiopia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130014.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">133</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4806</span> American Sign Language Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rishabh%20Nagpal">Rishabh Nagpal</a>, <a href="https://publications.waset.org/abstracts/search?q=Riya%20Uchagaonkar"> Riya Uchagaonkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Venkata%20Naga%20Narasimha%20Ashish%20Mernedi"> Venkata Naga Narasimha Ashish Mernedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Hambaba"> Ahmed Hambaba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real-time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks -VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. 
[4805] Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract: In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen, as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks are used to classify 11-element feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabet signs and a predefined gesture for space. Three types of features are used: bending, via six bend sensors; orientation in three dimensions, via accelerometers; and contact at vital points, via contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems, and machine learning techniques to build a low-cost wearable glove that is precise, elegant, and portable.
Keywords: American Sign Language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove
Downloads: 301
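As a rough picture of the classification step, the snippet below trains a Gaussian naive Bayes classifier (a simple stand-in for the paper's linear Bayes classifier) on 11-element feature vectors. The data are random placeholders and the class count is a reading of the abstract (27 alphabet signs plus a space gesture); the point is only the shape of the pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder data: 11 features per sample (six bend sensors, three
# accelerometer axes, two contact sensors -- an assumed split) and
# 28 classes (27 alphabet signs plus a space gesture).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(280, 11))
y_train = rng.integers(0, 28, size=280)

clf = GaussianNB().fit(X_train, y_train)
label = clf.predict(rng.normal(size=(1, 11)))[0]  # classify one new reading
```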
[4804] Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System
Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur
Abstract: Sign language (SL) is used by deaf people and by others who cannot speak but can hear, or who have difficulty with spoken languages due to a disability. It is a visual gesture language that makes use of one or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system provides an effective computer interface for communication. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This work also reviews the SL corpora that have become available for various sign languages over the years. An ISL generation system requires a written form of SL, and several notation techniques exist for writing it. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Markup Language (SiGML) for ISL generation, and is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is built using HamNoSys for both English and Hindi. This dictionary serves as a database associating signs with words or phrases of a spoken language, and provides an admin panel for managing the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be created and stored in a database, and these notations can be manually converted into their corresponding SiGML files. The system takes a natural language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as healthcare, media, education, commerce, and transportation. This work helps researchers understand the techniques used for writing SL and for building SL generation systems.
Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian Sign Language (ISL), sign language
Downloads: 230
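To make the dictionary-to-SiGML step concrete, the sketch below wraps a dictionary entry in a minimal SiGML document of the kind an avatar player consumes. The gloss, the symbol names, and the one-entry dictionary are placeholders chosen here for illustration; the element inventory loosely follows the published SiGML format but should be checked against its specification.

```python
import xml.etree.ElementTree as ET

# Placeholder dictionary entry: a gloss mapped to a list of HamNoSys
# symbol names (illustrative values, not a real ISL sign).
HAMNOSYS_DICT = {"HELLO": ["hamflathand", "hamforehead", "hammoveo"]}

def gloss_to_sigml(gloss: str) -> str:
    """Build a minimal SiGML document for one sign."""
    sigml = ET.Element("sigml")
    sign = ET.SubElement(sigml, "hns_sign", gloss=gloss)
    manual = ET.SubElement(sign, "hamnosys_manual")
    for symbol in HAMNOSYS_DICT[gloss]:
        ET.SubElement(manual, symbol)  # one empty element per symbol
    return ET.tostring(sigml, encoding="unicode")

print(gloss_to_sigml("HELLO"))
```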
[4803] Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children
Authors: Hsiu Tan Liu, Chun Jung Liu
Abstract: A sign language receptive skills test serves multiple purposes: it can be an important tool in education and a way to understand the sign language ability of deaf children. No test for these purposes had been available in Taiwan. Based on expert discussion and on the standardized Taiwanese Sign Language receptive test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed and its items designed. After multiple rounds of pre-trials, discussion, and revision, the TSL-RST was finalized; it can be administered and scored online. Thirty-three deaf children from all three deaf schools in Taiwan agreed to be tested. Through item analysis, items with a good discrimination index and a fair difficulty index were retained; psychometric indices of reliability and validity were established; and a regression formula was derived to predict the sign language receptive skills of deaf children. The main results are as follows. (1) The TSL-RST includes three sub-tests: vocabulary comprehension (21 items), syntax comprehension (20 items), and paragraph comprehension (9 items). (2) The TSL-RST can be administered individually online, so deaf students' sign language ability can be computed quickly and objectively and they receive feedback and results immediately, which contributes to both teaching and research. Most subjects complete the test within 25 minutes, and during the test they can answer the questions without relying on reading ability or memory capacity. (3) Vocabulary comprehension is the easiest sub-test, syntax comprehension is harder, and paragraph comprehension is the hardest; each of the three sub-tests, and the test as a whole, shows a good item discrimination index. (4) The psychometric indices are good, including internal consistency (Cronbach's alpha), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to nonverbal IQ, teachers' ratings of students' sign language ability, and students' self-ratings of their own ability. Higher-grade students performed better than lower-grade students, and students with deaf parents performed better than those with hearing parents, supporting the discriminant validity of the TSL-RST. (5) The predictors of primary deaf students' sign language ability are age and the age at which they started learning sign language. The results suggest that the TSL-RST can effectively assess deaf students' sign language ability; the study also proposes a model for developing sign language tests.
Keywords: comprehension test, elementary school, sign language, Taiwan Sign Language
Downloads: 187
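Among the psychometric indices mentioned, internal consistency is the most mechanical to compute. The snippet below shows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), on an illustrative (subjects x items) score matrix; the random 0/1 scores stand in for real item responses.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a (subjects x items) matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative: 33 subjects x 21 vocabulary items scored 0/1.
rng = np.random.default_rng(0)
alpha = cronbach_alpha(rng.integers(0, 2, size=(33, 21)).astype(float))
```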
[4802] Assessing Language Dominance in Mexican Deaf Signers with the Bilingual Language Profile (BLP)
Authors: E. Mendoza, D. Jackson-Maldonado, G. Avecilla-Ramírez, A. Mondaca
Abstract: Assessing language proficiency is a major issue in psycholinguistic research. There are multiple tools that measure language dominance and language proficiency in hearing bilinguals; however, this is not the case for Deaf bilinguals. In particular, there are few, if any, assessment tools suited to describing the multilingual abilities of Mexican Deaf signers, and as a consequence, the linguistic characteristics of the Mexican Deaf population have been poorly described. This paper explains the changes required to adapt the Bilingual Language Profile (BLP) to Mexican Sign Language (LSM) and written/oral Spanish. The BLP is a self-evaluation tool that has been adapted and translated to several oral languages, but not to sign languages. Lexical, syntactic, cultural, and structural changes were applied to the BLP. Thirty-five Mexican Deaf signers, all enrolled in higher education programs, participated in a pilot study. The BLP was presented online in written Spanish via Google Forms; no additional information in LSM was provided. Results show great heterogeneity, as is expected of Deaf populations, and the BLP seems to be a useful tool for building a bilingual profile of the Mexican Deaf population. This is a first attempt to adapt a widely tested tool in bilingualism research to a sign language; further modifications are still needed.
Keywords: Deaf bilinguals, assessment tools, Bilingual Language Profile, Mexican Sign Language
Downloads: 153
[4801] Inclusive Cultural Heritage Tourism Project
Authors: L. Cruz-Lopes, M. Sell, P. Escudeiro, B. Esteves
Abstract: It can be difficult for deaf people to communicate, since spoken and written languages differ from sign language, and there is a clear lack of inclusiveness when it comes to getting information, visiting places of cultural heritage, or using services and infrastructure. By creating assistive technology that enables deaf individuals to get around communication hurdles and encourages inclusive tourism, the Inclusive Cultural Heritage Tourism (ICHT) initiative hopes to increase knowledge of sign language. The purpose of the project is to develop online and on-site sign language tools and material for use at popular tourist destinations in the northern region of Portugal, including Torre dos Clérigos, the Lello bookstore, Maia Zoo, Porto wine cellars, and the São Pedro do Sul (Viseu) thermae. The ICHT system consists of an application using holography, a mobile game, an online platform for collaboration between deaf and hearing users, and a collection of International Sign training courses. The project also offers the prospect of a more inclusive society by introducing a method of teaching sign languages to tourism industry professionals. As a result, the teaching and learning of sign language, along with the assistive technology tools created by the project, set up an inclusive environment for the deaf community, producing results in the area of automatic sign language translation and aiding the global recognition of the Portuguese tourism industry.
Keywords: inclusive tourism, games, International Sign training, deaf community
Downloads: 116
[4800] American Slang: Perception and Connotations – Issues of Translation
Authors: Lison Carlier
Abstract: The English that is taught in school or used in the media nowadays is defined as 'standard English', although unstandardized, or 'parallel', Englishes are practiced throughout the world. The existence of these parallel Englishes has challenged standardization by imposing their own specific vocabulary or grammar. These non-standard varieties tend to be regarded as inferior and therefore pose a problem for translation. In the USA, 'slanguage', or slang, is a good example of a parallel language. It consists of a particular set of vocabulary, used mostly in speech and rarely in writing. Although qualified as vulgar and often reduced to an urban language spoken by young people from lower classes, slang, often the first language spoken between youths, is still the most common language used in the English-speaking world. Moreover, the prime meaning of 'informal' (as in an informal language), that is, a language spoken with persons the speaker knows, seems to have been put aside and replaced in the popular mind by the idea of vulgarity and inappropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and the cultural background usually associated with this parallel language: one will, unwillingly, be predisposed to categorize its speaker as belonging to a particular group of people, and the way one sees that speaker is paramount and needs to be transposed into the target language. This paper analyzes American slang, its use, its perception, the image it gives of its speakers, and its translation into French, using the book Is Everyone Hanging Out Without Me? (And Other Concerns) as an example. In her autobiography/personal essay collection, comedy writer, actress, and author Mindy Kaling writes in a very familiar English, including slang, which participates in the construction of her own voice and style and enables a deeper connection with her readers.
Keywords: translation, English, slang, French
Downloads: 318
Moreover, it appears that the primary meaning of 'informal' (as in an informal language) – a language spoken with people the speaker knows – has been put aside and replaced in the general mind by the idea of vulgarity and inappropriateness, when in fact informality is a sign of intimacy, not of vulgarity. When it comes to translating American slang, the main problem a translator encounters is the image and cultural background usually associated with this 'parallel' language. Indeed, one will, unwillingly, be predisposed to categorize a speaker of a 'parallel' language as belonging to a particular group of people. The way one sees a speaker using it is paramount, and needs to be transposed into the target language. This paper conducts an analysis of American slang – its use, its perception, and the image it gives of its speakers – and its translation into French, using the novel Is Everyone Hanging Out Without Me? (And Other Concerns) by way of example. In her autobiography/personal essay book, comedy writer, actress and author Mindy Kaling writes in a very familiar English, including slang, which participates in the construction of her own voice and style and enables a deeper connection with her readers.
Keywords: translation, English, slang, French
Procedia: https://publications.waset.org/abstracts/60197/american-slang-perception-and-connotations-issues-of-translation | PDF: https://publications.waset.org/abstracts/60197.pdf (Downloads: 318)

[4799] Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language
Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Abstract: Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce. This manuscript introduces an annotated sequenced facial expression dataset in the context of sign language, comprising over 3,000 facial images extracted from the daily news and weather forecasts of the public TV station PHOENIX. Unlike the majority of existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with different head poses, orientations, and movements.
In addition, in the majority of images the signers are mouthing words, which makes the data more challenging. To annotate the dataset, primary, secondary, and tertiary dyads of seven basic emotions are considered: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy", with a "None" class for images whose facial expression cannot be described by any of these emotions. Although FePh is provided as a facial expression dataset of signers in sign language, it has wider applications in gesture recognition and Human-Computer Interaction (HCI) systems.
Keywords: annotated facial expression dataset, gesture recognition, sequenced facial expression dataset, sign language recognition
Procedia: https://publications.waset.org/abstracts/129717/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language | PDF: https://publications.waset.org/abstracts/129717.pdf (Downloads: 159)
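The annotation scheme above (dyads of up to three basic emotions, plus a "None" class) maps naturally onto multi-hot label vectors. The Python sketch below illustrates that encoding; the class list comes from the abstract, but the index order and function name are illustrative assumptions, not the dataset's published format.

```python
import numpy as np

# The seven basic emotions used to annotate FePh, plus an implicit "None"
# class for images whose expression fits no category (class list from the
# abstract; the index order here is an assumption, not the dataset's own).
EMOTIONS = ["sad", "surprise", "fear", "angry", "neutral", "disgust", "happy"]

def encode_annotation(labels):
    """Turn a primary/secondary/tertiary dyad of emotions into a multi-hot
    vector; an empty list maps to the all-zero vector, i.e. "None"."""
    vec = np.zeros(len(EMOTIONS), dtype=np.float32)
    for name in labels:
        vec[EMOTIONS.index(name)] = 1.0
    return vec

# Example: a frame annotated as both "surprise" and "happy".
print(encode_annotation(["surprise", "happy"]))   # [0. 1. 0. 0. 0. 0. 1.]
print(encode_annotation([]))                      # the "None" class
```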
[4798] Computerized Analysis of Phonological Structure of 10,400 Brazilian Sign Language Signs
Authors: Wanessa G. Oliveira, Fernando C. Capovilla
Abstract: Capovilla and Raphael's Libras Dictionary documents a corpus of 4,200 Brazilian Sign Language (Libras) signs. Duduchi and Capovilla's software SignTracking permits users to retrieve signs even when the corresponding gloss is unknown, and to discover the meaning of all 4,200 signs simply by clicking on graphic menus of sign characteristics (phonemes). Duduchi and Capovilla discovered that the ease with which any given sign can be retrieved is an inverse function of the average popularity of its component phonemes: signs composed of rare (distinctive) phonemes are easier to retrieve than those composed of common phonemes. SignTracking thus offers a means of computing the average popularity of the phonemes that make up each of the 4,200 signs, providing a precise measure of how easily signs can be retrieved and sign meanings discovered. Duduchi and Capovilla's logarithmic model proved valid: the degree to which any given sign can be retrieved is an inverse function of the arithmetic mean of the logarithms of the popularity of its component phonemes. Capovilla, Raphael and Mauricio's New Libras Dictionary documents a corpus of 10,400 Libras signs. The present analysis revealed the "DNA" structure of Libras by mapping the incidence of 501 sign phonemes resulting from the layered distribution of five parameters: 163 handshape phonemes (CherEmes-ManusIculi); 34 finger-shape phonemes (DactilEmes-DigitumIculi); 55 hand-placement phonemes (ArtrotoToposEmes-ArticulatiLocusIculi); 173 movement-dimension phonemes (CinesEmes-MotusIculi) pertaining to direction, frequency, and type; and 76 facial expression phonemes (MascarEmes-PersonalIculi).
Keywords: Brazilian sign language, lexical retrieval, libras sign, sign phonology
Procedia: https://publications.waset.org/abstracts/86262/computerized-analysis-of-phonological-structure-of-10400-brazilian-sign-language-signs | PDF: https://publications.waset.org/abstracts/86262.pdf (Downloads: 345)
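The logarithmic model stated in the abstract reduces to a one-line score: the arithmetic mean of the log-popularity of a sign's component phonemes, with lower scores (rarer phonemes) meaning easier retrieval. A minimal Python sketch follows; the phoneme names and popularity counts are invented for illustration, and only the formula follows the abstract.

```python
import math

# Hypothetical phoneme popularity counts: how many corpus signs use each
# phoneme (illustrative numbers, not taken from the dictionary itself).
popularity = {"handshape_B": 412, "location_chest": 655, "movement_arc": 97}

def retrieval_difficulty(sign_phonemes, popularity):
    """Mean log-popularity of a sign's component phonemes; per the model in
    the abstract, retrieval ease is an inverse function of this value."""
    logs = [math.log(popularity[p]) for p in sign_phonemes]
    return sum(logs) / len(logs)

# A sign built from a rare phoneme scores lower (easier to retrieve)
# than one built entirely from common phonemes.
print(retrieval_difficulty(["movement_arc", "handshape_B"], popularity))
print(retrieval_difficulty(["location_chest", "handshape_B"], popularity))
```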
[4797] British English vs. American English: A Comparative Study
Authors: Halima Benazzouz
Abstract: It is often believed that British English and American English are the foremost varieties of the English language, serving as reference norms for other varieties; that is why they have so often been compared and contrasted. Meanwhile, the terms "British English" and "American English" are used differently by different people to refer to: (1) two national varieties, each subsuming regional and other sub-varieties, standard and non-standard; (2) two national standard varieties, each of which is only part of the range of English within its own state, but the most prestigious part; (3) two international varieties, that is, each more than a national variety of the English language; (4) two international standard varieties that may or may not each subsume other standard varieties. Furthermore, each variety serves as a reference norm for users of the language elsewhere. Without a clear identification as belonging primarily to one variety or the other, British English (Br.Eng) and American English (Am.Eng) are understood as national or international varieties. British English and American English are both "variants" and "varieties" of the English language, more similar than different. In brief, the following may justify general categories of difference between Standard American English (S.Am.E) and Standard British English (S.Br.E), each having its own sociolectal value: a difference in pronunciation may exist between the two foremost varieties although the spelling is the same; conversely, a divergence in spelling may be recognized even though the pronunciation is the same; and in some cases the same term differs between the varieties even though spelling and pronunciation are alike. Otherwise, grammar, syntax, and punctuation are used distinctively to distinguish the two varieties. Beyond these differences, spelling is noted as one of the chief sources of variation.
Keywords: Greek, Latin, French pronunciation expert, varieties of English language
Procedia: https://publications.waset.org/abstracts/15569/british-english-vs-american-english-a-comparative-study | PDF: https://publications.waset.org/abstracts/15569.pdf (Downloads: 501)

[4796] Irreducible Sign Patterns of Minimum Rank of 3 and Symmetric Sign Patterns That Allow Diagonalizability
Authors: Sriparna Bandopadhyay
Abstract: It is known that irreducible sign patterns in general may not allow diagonalizability, in particular irreducible sign patterns with minimum rank greater than or equal to 4. It is also known that every irreducible sign pattern matrix with minimum rank of 2 allows diagonalizability with rank 2 and with the maximum rank of the sign pattern. In general, sign patterns with minimum rank of 3 may not allow diagonalizability if the condition of irreducibility is dropped, but the problem of whether every irreducible sign pattern with minimum rank of 3 allows diagonalizability remains open. In this paper it is shown that irreducible sign patterns with minimum rank of 3 allow diagonalizability under certain conditions on the underlying graph. An alternate proof is given of the results that every sign pattern matrix with minimum rank of 2 and no zero lines allows diagonalizability with rank 2, and that every full sign pattern allows diagonalizability with all permissible ranks of the sign pattern. Some open problems regarding composite cycles in an irreducible symmetric sign pattern that support a rank-principal certificate are also answered.
Keywords: irreducible sign patterns, minimum rank, symmetric sign patterns, rank-principal certificate, allowing diagonalizability
Procedia: https://publications.waset.org/abstracts/173597/irreducible-sign-patterns-of-minimum-rank-of-3-and-symmetric-sign-patterns-that-allow-diagonalizability | PDF: https://publications.waset.org/abstracts/173597.pdf (Downloads: 98)
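For readers outside combinatorial matrix theory, the standard definitions behind this abstract can be sketched as follows (textbook background, not notation taken from the paper itself):

```latex
% A sign pattern, its qualitative class, and its minimum rank.
\[
A = (a_{ij}), \quad a_{ij} \in \{+, -, 0\}, \qquad
Q(A) = \{\, B \in \mathbb{R}^{n \times n} : \operatorname{sign}(b_{ij}) = a_{ij} \,\}
\]
\[
\operatorname{mr}(A) = \min \{\, \operatorname{rank}(B) : B \in Q(A) \,\}
\]
% A "allows diagonalizability" if some B in Q(A) is diagonalizable; the open
% question the abstract addresses concerns irreducible A with mr(A) = 3.
```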
[4795] Sign Language Recognition of Static Gestures Using Kinect™ and Convolutional Neural Networks
Authors: Rohit Semwal, Shivam Arora, Saurav, Sangita Roy
Abstract: This work proposes a supervised framework with deep convolutional neural networks (CNNs) for vision-based sign language recognition of static gestures. Our approach addresses the acquisition and segmentation of correct inputs for the CNN-based classifier. The Microsoft Kinect™ sensor can track hands efficiently even under complex environmental conditions. Skin-colour-based segmentation is applied to cropped images of hands in different poses depicting different sign language gestures, and the segmented hand images are used as input to the classifier. The proposed CNN classifier is able to classify the input images with a high degree of accuracy. The system was trained and tested on 39 static sign language gestures, comprising the 26 letters of the alphabet and 13 commonly used words. The paper first defines the problem of building the proposed system, which acts as a sign language translator between deaf/mute individuals and the rest of society. It then reviews existing knowledge in the area and work done by other researchers, and briefly describes the working principles of the components of a CNN. The architecture and system design specifications of the proposed system are discussed in the subsequent sections of the paper to give the reader a clear picture of the capability required, and the design then gives the top-level details of how the proposed system meets the requirements.
Keywords: sign language, CNN, HCI, segmentation
Procedia: https://publications.waset.org/abstracts/150342/sign-language-recognition-of-static-gestures-using-kinect-and-convolutional-neural-networks | PDF: https://publications.waset.org/abstracts/150342.pdf (Downloads: 157)
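A minimal sketch of the pipeline the abstract describes: skin-colour segmentation of a hand crop, followed by a small CNN with 39 output classes. The threshold values, layer sizes, and 64x64 input resolution are assumptions; the abstract does not specify these details.

```python
import numpy as np
import torch
import torch.nn as nn

def skin_mask(rgb):
    """Crude skin-colour segmentation on an HxWx3 uint8 RGB image
    (threshold values are illustrative, not the paper's own)."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)
    return mask.astype(np.float32)

class GestureCNN(nn.Module):
    """Small CNN over segmented 64x64 hand crops; 39 classes = 26 letters
    + 13 common words, as in the paper (layer sizes are assumptions)."""
    def __init__(self, n_classes=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

crop = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in Kinect crop
x = torch.from_numpy(skin_mask(crop)).reshape(1, 1, 64, 64)
print(GestureCNN()(x).shape)  # torch.Size([1, 39])
```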
[4794] Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract: Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between the hearing- and speech-impaired community and those who do not understand sign language. With increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past decade, and technological advances in Artificial Intelligence and Machine Learning have made accurate SLR systems more practical and feasible. This paper presents a distinct approach to SLR by framing it as a video classification problem solved with Deep Learning (DL), using a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions; these features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with additional videos from the healthcare domain, and the model is evaluated on it. The model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture: the approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
Keywords: sign language recognition, deep learning, convolution neural network, recurrent neural network
Procedia: https://publications.waset.org/abstracts/188417/healthcare-signnet-advanced-video-classification-for-medical-sign-language-recognition-using-cnn-and-rnn-models | PDF: https://publications.waset.org/abstracts/188417.pdf (Downloads: 27)
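The CNN-for-space / RNN-for-time split described above can be sketched in a few lines of PyTorch. The layer sizes, the GRU choice, and the 50-class output are placeholders, not Healthcare-SignNet's actual specification.

```python
import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    """Per-frame CNN features fed to an RNN over time, mirroring the
    spatial/temporal split described in the abstract (all sizes and the
    GRU choice are assumptions, not the authors' architecture)."""
    def __init__(self, n_classes=50, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, video):                    # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1))    # (B*T, feat_dim)
        out, _ = self.rnn(feats.view(b, t, -1))  # (B, T, 64)
        return self.head(out[:, -1])             # classify from the last step

clip = torch.randn(2, 12, 3, 64, 64)             # two 12-frame stand-in clips
print(CNNRNNClassifier()(clip).shape)            # torch.Size([2, 50])
```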
[4793] Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract: In order to address issues in sign language linguistics, help maintain a high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created by using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both micro-level analysis of metaphors in terms of phonological parameters as sub-lexical features and macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in SL literature in a range of sign languages, and this study follows that practice. The presentation covers a qualitative analysis of 34 pieces of LSL literature, employing the ELAN annotation software widely used in SL research. The aim is to examine how specific types of each phonological parameter are used to create metaphors in LSL literature, and which metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power; LSL poets metaphorically encode status by encoding another meaning in the same sign, resulting in double metaphors. A metaphor of identity has also been determined, and the poetic context reveals that this metaphor can also be read as a metaphor for life. The study further notes that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study detected locations, nonmanual features, and other elements never previously mentioned in SL research as being used for the creation of metaphors.
Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia: https://publications.waset.org/abstracts/147518/lithuanian-sign-language-literature-metaphors-at-the-phonological-level | PDF: https://publications.waset.org/abstracts/147518.pdf (Downloads: 136)
[4792] Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract: Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is taken to operate as a mapping from an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, implying a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e. subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a hyperplane, and if any element x falls outside the region bounded by these hyperplanes it is reported as an error. This methodology yields a self-correcting recognition process that can identify fingerspelling from a variety of sign languages and successfully identify both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
Procedia: https://publications.waset.org/abstracts/96031/using-convolutional-neural-networks-to-distinguish-different-sign-language-alphanumerics | PDF: https://publications.waset.org/abstracts/96031.pdf (Downloads: 100)
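A simplified sketch of the corrector idea, under stated assumptions: centre the measurements on the bulk data, reduce with the Kaiser rule, whiten, and test new points against a separating hyperplane. The single Fisher-style hyperplane below stands in for the pairwise-correlation clustering the abstract describes, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, (500, 20))          # stand-in "correct" measurements
Y = rng.normal(3.0, 1.0, (12, 20))           # stand-in "error" measurements

# Centre on the bulk data, PCA-reduce with the Kaiser rule, then whiten.
mu = M.mean(axis=0)
Mc, Yc = M - mu, Y - mu
cov = np.cov(Mc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
keep = vals > vals.mean()                    # Kaiser rule: above-average eigenvalues
W = vecs[:, keep] / np.sqrt(vals[keep])      # projection plus whitening
Mw, Yw = Mc @ W, Yc @ W

# One hyperplane for the error set (treated here as a single cluster); the
# mean-difference normal direction is a simplification of the pairwise
# positively-correlated clustering described in the abstract.
w = Yw.mean(axis=0) - Mw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = (Yw @ w).min()                   # flag anything on the error side

def is_error(x):
    """Report x as a likely misprediction if it crosses the hyperplane."""
    return float(((x - mu) @ W) @ w) >= threshold

print(is_error(M[0]), is_error(Y[0]))        # expect: False True
```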
[4791] TechWhiz: Empowering Deaf Students through Inclusive Education
Authors: Paula Escudeiro, Nuno Escudeiro, Márcia Campos, Francisca Escudeiro
Abstract: In today's world, technical and scientific knowledge plays a vital role in education, research, and employment. Deaf students face unique challenges in educational settings, particularly when it comes to understanding technical and scientific terminology. The reliance on written and spoken languages can create barriers for deaf individuals who primarily communicate in sign language, and this lack of accessibility can hinder their learning experience and compromise equity in education. To address this issue, the TechWhiz project has been developed as a comprehensive glossary of scientific and technical concepts explained in sign language. By giving deaf students access to education in their first language, TechWhiz aims to enhance their learning achievements and promote inclusivity, while also fostering equity in education for all students.
Keywords: deaf students, technical and scientific knowledge, automatic sign language, inclusive education
Procedia: https://publications.waset.org/abstracts/175618/techwhiz-empowering-deaf-students-through-inclusive-education | PDF: https://publications.waset.org/abstracts/175618.pdf (Downloads: 68)

[4790] Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language
Authors: Daleesha M. Viswanathan, Sumam Mary Idicula
Abstract: Dynamic hand gestures are an intrinsic component of sign language communication, and extracting the spatial-temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. The main concern of this paper is finding a discrete feature descriptor for the motion trajectory based on orientation features. The Kalman filter algorithm and Hidden Markov Models (HMMs) are incorporated into the recognition system for hand trajectory tracking and for spatial-temporal classification, respectively.
Keywords: orientation features, discrete feature vector, HMM, Indian sign language
Procedia: https://publications.waset.org/abstracts/35653/hand-motion-trajectory-analysis-for-dynamic-hand-gestures-used-in-indian-sign-language | PDF: https://publications.waset.org/abstracts/35653.pdf (Downloads: 370)
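A minimal sketch of the two steps named in the abstract just above: a constant-velocity Kalman filter over 2D hand positions, and quantisation of frame-to-frame motion direction into the discrete orientation symbols an HMM classifier would consume. The noise settings and the eight-bin quantisation are illustrative assumptions.

```python
import numpy as np

def kalman_track(points, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over 2D hand positions
    (process/measurement noise levels are illustrative)."""
    F = np.eye(4); F[0, 2] = F[1, 3] = 1.0      # state: [x, y, vx, vy]
    H = np.eye(2, 4)                            # we observe position only
    x, P = np.zeros(4), np.eye(4)
    out = []
    for z in points:
        x, P = F @ x, F @ P @ F.T + q * np.eye(4)            # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(2))
        x, P = x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P  # update
        out.append(x[:2].copy())
    return np.array(out)

def orientation_codes(track, bins=8):
    """Quantise frame-to-frame motion direction into discrete symbols,
    the kind of orientation feature vector an HMM classifier consumes."""
    d = np.diff(track, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    return (ang / (2 * np.pi / bins)).astype(int)

t = np.linspace(0, np.pi, 30)
noisy = np.c_[np.cos(t), np.sin(t)] + 0.02 * np.random.randn(30, 2)
print(orientation_codes(kalman_track(noisy)))   # symbols for an arc gesture
```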
[4789] Need for E-Learning: An Effective Method in Educating the Persons with Hearing Impairment Using Sign Language
Authors: S. Vijayakumar, S. B. Rathna Kumar, Navnath D. Jagadale
Abstract: Learning and teaching are the challenges ahead in the education of students with hearing impairment who use sign language (SHISL), with both students and teachers facing difficulties in the learning/teaching process. Communication is one of the main barriers when teaching SHISL. Further, the courses of study and subjects available to SHISL are limited, at least in countries like India. Students with hearing impairment mainly opt for sign language as their mode of communication, but subjects like physics, chemistry, and advanced mathematics are not available in the curriculum for SHISL, since their content and ideas are complex. In India, exemption from language papers is granted to students with hearing impairment, which may give them the opportunity to secure secondary/higher secondary qualifications. It is a known fact that students with hearing impairment face difficulty in their future careers, often securing neither higher studies nor good employment opportunities. Vocational training in various trades will land them in a few jobs with little pay, and not all of them are blessed with higher positions in government or private sectors in competitive fields or where technical knowledge is required. E-learning with sign language instruction can be used for teaching language and science subjects. Computer Based Instruction (CBI), Computer Based Training (CBT), and Computer Assisted Instruction (CAI) are now part and parcel of modern education, and such content can include signed video clips corresponding to each topic. Learning language subjects will improve SHISL's understanding of concepts in other subjects, and learning science subjects like their hearing counterparts will enable them to go further in their studies and reach higher to pluck a fruit of the tree of employment.
Keywords: students with hearing impairment using sign language, hearing impairment, language subjects, science subjects, e-learning
Procedia: https://publications.waset.org/abstracts/41414/need-for-e-learning-an-effective-method-in-educating-the-persons-with-hearing-impairment-using-sign-language | PDF: https://publications.waset.org/abstracts/41414.pdf (Downloads: 405)
[4788] Deaf Inmates in Canadian Prisons: Addressing Discrimination through Staff Training Videos with Deaf Actors
Authors: Tracey Bone
Abstract: Deaf inmates, whose first or preferred language is a signed language, experience barriers to the necessary two-way communication with correctional staff, and to the educational and social programs that would enhance their eligibility for conditional release from the federal prison system in Canada. The development of visual content to enhance the knowledge and skills of correctional staff is a contemporary strategy intended to significantly improve the correctional experience for deaf inmates. This presentation reports on the development of two distinct training videos created to enhance staff's understanding of the needs of deaf inmates: the first a two-part simulation of an interaction with a deaf inmate, the second an interview with a deaf academic. Part one of the first video demonstrates the challenges and misunderstandings inherent in communicating across languages without a qualified sign language interpreter; part two demonstrates the ease of communication when communication needs are met. The second video incorporates the experiences of a deaf academic to provide the cultural grounding necessary to educate staff about the unique experiences associated with being a visual language user. Lack of staff understanding or awareness of deaf culture and language must not be an acceptable reason for the inadequate treatment of deaf visual language users in federal prisons. This paper demonstrates a contemporary approach to meeting the human rights and needs of this unique and often ignored inmate subpopulation. The deaf community supports this visual approach to enhancing staff understanding of the unique needs of this population; a study of its effectiveness is currently underway.
Keywords: accommodations, American Sign Language (ASL), deaf inmates, sensory deprivation
Procedia: https://publications.waset.org/abstracts/88508/deaf-inmates-in-canadian-prisons-addressing-discrimination-through-staff-training-videos-with-deaf-actors | PDF: https://publications.waset.org/abstracts/88508.pdf (Downloads: 149)