CINXE.COM
Search results for: audio-visual speech recognition
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: audio-visual speech recognition</title> <meta name="description" content="Search results for: audio-visual speech recognition"> <meta name="keywords" content="audio-visual speech recognition"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="audio-visual speech recognition" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="audio-visual speech recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2370</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: audio-visual speech recognition</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2190</span> Recognition and Protection of Indigenous Society in Indonesia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Triyanto">Triyanto</a>, <a href="https://publications.waset.org/abstracts/search?q=Rima%20Vien%20Permata%20Hartanto"> Rima Vien Permata Hartanto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Indonesia is a legal state. The consequence of this status is the recognition and protection of the existence of indigenous peoples. This paper aims to describe the dynamics of legal recognition and protection for indigenous peoples within the framework of Indonesian law. This paper is library research based on literature. The result states that although the constitution has normatively recognized the existence of indigenous peoples and their traditional rights, in reality, not all rights were recognized and protected. The protection and recognition for indigenous people need to be strengthened. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=indigenous%20peoples" title="indigenous peoples">indigenous peoples</a>, <a href="https://publications.waset.org/abstracts/search?q=customary%20law" title=" customary law"> customary law</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20law" title=" state law"> state law</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20of%20law" title=" state of law"> state of law</a> </p> <a href="https://publications.waset.org/abstracts/74295/recognition-and-protection-of-indigenous-society-in-indonesia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74295.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">330</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2189</span> Detecting Characters as Objects Towards Character Recognition on Licence Plates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alden%20Boby">Alden Boby</a>, <a href="https://publications.waset.org/abstracts/search?q=Dane%20Brown"> Dane Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=James%20Connan"> James Connan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is a well-researched topic across disciplines. Regardless, creating a solution that can cater to multiple situations is still challenging. Vehicle licence plates lack an international standard, meaning that different countries and regions have their own licence plate format. A problem that arises from this is that the typefaces and designs from different regions make it difficult to create a solution that can cater to a wide range of licence plates. The main issue concerning detection is the character recognition stage. This paper aims to create an object detection-based character recognition model trained on a custom dataset that consists of typefaces of licence plates from various regions. Given that characters have featured consistently maintained across an array of fonts, YOLO can be trained to recognise characters based on these features, which may provide better performance than OCR methods such as Tesseract OCR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=licence%20plate%20recognition" title=" licence plate recognition"> licence plate recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a> </p> <a href="https://publications.waset.org/abstracts/155443/detecting-characters-as-objects-towards-character-recognition-on-licence-plates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2188</span> Relevant LMA Features for Human Motion Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion recognition from videos is actually a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially motion representation step with relevant features. Our descriptor vector is inspired from Laban Movement Analysis method. We propose discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on MSRC-12 and UTKinect datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discriminative%20LMA%20features" title="discriminative LMA features">discriminative LMA features</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20reduction" title=" features reduction"> features reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20recognition" title=" human motion recognition"> human motion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a> </p> <a href="https://publications.waset.org/abstracts/96299/relevant-lma-features-for-human-motion-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2187</span> Effects of Reversible Watermarking on Iris Recognition Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrew%20Lock">Andrew Lock</a>, <a href="https://publications.waset.org/abstracts/search?q=Alastair%20Allen"> Alastair Allen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fragile watermarking has been proposed as a means of adding additional security or functionality to biometric systems, particularly for authentication and tamper detection. In this paper we describe an experimental study on the effect of watermarking iris images with a particular class of fragile algorithm, reversible algorithms, and the ability to correctly perform iris recognition. We investigate two scenarios, matching watermarked images to unmodified images, and matching watermarked images to watermarked images. We show that different watermarking schemes give very different results for a given capacity, highlighting the importance of investigation. At high embedding rates most algorithms cause significant reduction in recognition performance. However, in many cases, for low embedding rates, recognition accuracy is improved by the watermarking process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title=" iris recognition"> iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=reversible%20watermarking" title=" reversible watermarking"> reversible watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20engineering" title=" vision engineering"> vision engineering</a> </p> <a href="https://publications.waset.org/abstracts/9663/effects-of-reversible-watermarking-on-iris-recognition-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9663.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2186</span> Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Indira%20Kothalanka">Indira Kothalanka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, due to which they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper, hypo and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems do persist even after the palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu speaking individuals aged between 7-17 years with a history of cleft palate. Telugu is a Dravidian language which is spoken in Andhra Pradesh and Telangana states in India. It is a language with the third largest number of native speakers in India and the most spoken Dravidian language. The speech of the informants is analysed using single word list, sentences, passage and conversation. Spectrographic analysis is carried out using PRAAT, speech analysis software. The place and manner of articulation of consonant sounds is studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular, velar stops and nasal fricatives which are non-native in Telugu. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cleft%20palate" title="cleft palate">cleft palate</a>, <a href="https://publications.waset.org/abstracts/search?q=compensatory%20articulation" title=" compensatory articulation"> compensatory articulation</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrographic%20analysis" title=" spectrographic analysis"> spectrographic analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=PRAAT" title=" PRAAT"> PRAAT</a> </p> <a href="https://publications.waset.org/abstracts/31534/compensatory-articulation-of-pressure-consonants-in-telugu-cleft-palate-speech-a-spectrographic-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31534.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2185</span> An Early Attempt of Artificial Intelligence-Assisted Language Oral Practice and Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paul%20Lam">Paul Lam</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20Wong"> Kevin Wong</a>, <a href="https://publications.waset.org/abstracts/search?q=Chi%20Him%20Chan"> Chi Him Chan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Constant practicing and accurate, immediate feedback are the keys to improving students’ speaking skills. However, traditional oral examination often fails to provide such opportunities to students. The traditional, face-to-face oral assessment is often time consuming – attending the oral needs of one student often leads to the negligence of others. Hence, teachers can only provide limited opportunities and feedback to students. Moreover, students’ incentive to practice is also reduced by their anxiety and shyness in speaking the new language. A mobile app was developed to use artificial intelligence (AI) to provide immediate feedback to students’ speaking performance as an attempt to solve the above-mentioned problems. Firstly, it was thought that online exercises would greatly increase the learning opportunities of students as they can now practice more without the needs of teachers’ presence. Secondly, the automatic feedback provided by the AI would enhance students’ motivation to practice as there is an instant evaluation of their performance. Lastly, students should feel less anxious and shy compared to directly practicing oral in front of teachers. Technically, the program made use of speech-to-text functions to generate feedback to students. To be specific, the software analyzes students’ oral input through certain speech-to-text AI engine and then cleans up the results further to the point that can be compared with the targeted text. The mobile app has invited English teachers for the pilot use and asked for their feedback. Preliminary trials indicated that the approach has limitations. Many of the users’ pronunciation were automatically corrected by the speech recognition function as wise guessing is already integrated into many of such systems. Nevertheless, teachers have confidence that the app can be further improved for accuracy. It has the potential to significantly improve oral drilling by giving students more chances to practice. Moreover, they believe that the success of this mobile app confirms the potential to extend the AI-assisted assessment to other language skills, such as writing, reading, and listening. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20Intelligence" title="artificial Intelligence">artificial Intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20learning" title=" mobile learning"> mobile learning</a>, <a href="https://publications.waset.org/abstracts/search?q=oral%20assessment" title=" oral assessment"> oral assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=oral%20practice" title=" oral practice"> oral practice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech-to-text%20function" title=" speech-to-text function"> speech-to-text function</a> </p> <a href="https://publications.waset.org/abstracts/120972/an-early-attempt-of-artificial-intelligence-assisted-language-oral-practice-and-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/120972.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2184</span> ICanny: CNN Modulation Recognition Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jingpeng%20Gao">Jingpeng Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinrui%20Mao"> Xinrui Mao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhibin%20Deng"> Zhibin Deng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the low recognition rate on the composite signal modulation in low signal to noise ratio (SNR), this paper proposes a modulation recognition algorithm based on ICanny-CNN. Firstly, the radar signal is transformed into the time-frequency image by Choi-Williams Distribution (CWD). Secondly, we propose an image processing algorithm using the Guided Filter and the threshold selection method, which is combined with the hole filling and the mask operation. Finally, the shallow convolutional neural network (CNN) is combined with the idea of the depth-wise convolution (Dw Conv) and the point-wise convolution (Pw Conv). The proposed CNN is designed to complete image classification and realize modulation recognition of radar signal. The simulation results show that the proposed algorithm can reach 90.83% at 0dB and 71.52% at -8dB. Therefore, the proposed algorithm has a good classification and anti-noise performance in radar signal modulation recognition and other fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=modulation%20recognition" title="modulation recognition">modulation recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=composite%20signal" title=" composite signal"> composite signal</a>, <a href="https://publications.waset.org/abstracts/search?q=improved%20Canny%20algorithm" title=" improved Canny algorithm"> improved Canny algorithm</a> </p> <a href="https://publications.waset.org/abstracts/139350/icanny-cnn-modulation-recognition-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139350.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2183</span> A Survey on Speech Emotion-Based Music Recommendation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chirag%20Kothawade">Chirag Kothawade</a>, <a href="https://publications.waset.org/abstracts/search?q=Gourie%20Jagtap"> Gourie Jagtap</a>, <a href="https://publications.waset.org/abstracts/search?q=PreetKaur%20Relusinghani"> PreetKaur Relusinghani</a>, <a href="https://publications.waset.org/abstracts/search?q=Vedang%20Chavan"> Vedang Chavan</a>, <a href="https://publications.waset.org/abstracts/search?q=Smitha%20S.%20Bhosale"> Smitha S. Bhosale</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Psychological research has proven that music relieves stress, elevates mood, and is responsible for the release of “feel-good” chemicals like oxytocin, serotonin, and dopamine. It comes as no surprise that music has been a popular tool in rehabilitation centers and therapy for various disorders, thus with the interminably rising numbers of people facing mental health-related issues across the globe, addressing mental health concerns is more crucial than ever. Despite the existing music recommendation systems, there is a dearth of holistically curated algorithms that take care of the needs of users. Given that, an undeniable majority of people turn to music on a regular basis and that music has been proven to increase cognition, memory, and sleep quality while reducing anxiety, pain, and blood pressure, it is the need of the hour to fashion a product that extracts all the benefits of music in the most extensive and deployable method possible. Our project aims to ameliorate our users’ mental state by building a comprehensive mood-based music recommendation system called “Viby”. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=language" title="language">language</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction" title=" interaction"> interaction</a> </p> <a href="https://publications.waset.org/abstracts/177086/a-survey-on-speech-emotion-based-music-recommendation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177086.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2182</span> Video Based Automatic License Plate Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Ganoun">Ali Ganoun</a>, <a href="https://publications.waset.org/abstracts/search?q=Wesam%20Algablawi"> Wesam Algablawi</a>, <a href="https://publications.waset.org/abstracts/search?q=Wasim%20BenAnaif"> Wasim BenAnaif </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video based traffic surveillance based on License Plate Recognition (LPR) system is an essential part for any intelligent traffic management system. The LPR system utilizes computer vision and pattern recognition technologies to obtain traffic and road information by detecting and recognizing vehicles based on their license plates. Generally, the video based LPR system is a challenging area of research due to the variety of environmental conditions. The LPR systems used in a wide range of commercial applications such as collision warning systems, finding stolen cars, controlling access to car parks and automatic congestion charge systems. This paper presents an automatic LPR system of Libyan license plate. The performance of the proposed system is evaluated with three video sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=license%20plate%20recognition" title="license plate recognition">license plate recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition" title=" recognition"> recognition</a> </p> <a href="https://publications.waset.org/abstracts/9958/video-based-automatic-license-plate-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2181</span> Genetic Algorithm Based Deep Learning Parameters Tuning for Robot Object Recognition and Grasping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Delowar%20Hossain">Delowar Hossain</a>, <a href="https://publications.waset.org/abstracts/search?q=Genci%20Capi"> Genci Capi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper concerns with the problem of deep learning parameters tuning using a genetic algorithm (GA) in order to improve the performance of deep learning (DL) method. We present a GA based DL method for robot object recognition and grasping. GA is used to optimize the DL parameters in learning procedure in term of the fitness function that is good enough. After finishing the evolution process, we receive the optimal number of DL parameters. To evaluate the performance of our method, we consider the object recognition and robot grasping tasks. Experimental results show that our method is efficient for robot object recognition and grasping. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20grasping" title=" robot grasping"> robot grasping</a> </p> <a href="https://publications.waset.org/abstracts/67943/genetic-algorithm-based-deep-learning-parameters-tuning-for-robot-object-recognition-and-grasping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2180</span> Face Recognition Using Discrete Orthogonal Hahn Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Akhmedova">Fatima Akhmedova</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Liao"> Simon Liao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work, we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy and the ability to extract features either globally and locally. To assess the applicability of Hahn moments to Face Recognition we conduct two experiments on the Olivetti Research Laboratory (ORL) database and University of Notre-Dame (UND) X1 biometric collection. Fusion of the global features along with the features from local facial regions are used as an input for the conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Hahn%20moments" title=" Hahn moments"> Hahn moments</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition-by-parts" title=" recognition-by-parts"> recognition-by-parts</a>, <a href="https://publications.waset.org/abstracts/search?q=time-lapse" title=" time-lapse"> time-lapse</a> </p> <a href="https://publications.waset.org/abstracts/27781/face-recognition-using-discrete-orthogonal-hahn-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2179</span> A Profile of the Patients at the Hearing and Speech Clinic at the University of Jordan: A Retrospective Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maisa%20Haj-Tas">Maisa Haj-Tas</a>, <a href="https://publications.waset.org/abstracts/search?q=Jehad%20Alaraifi"> Jehad Alaraifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The significance of the study: This retrospective study examined the speech and language profiles of patients who received clinical services at the University of Jordan Hearing and Speech Clinic (UJ-HSC) from 2009 to 2014. The UJ-HSC clinic is located in the capital Amman and was established in the late 1990s. It is the first hearing and speech clinic in Jordan and one of first speech and hearing clinics in the Middle East. This clinic provides services to an annual average of 2000 patients who are diagnosed with different communication disorders. Examining the speech and language profiles of patients in this clinic could provide an insight about the most common disorders seen in patients who attend similar clinics in Jordan. It could also provide information about community awareness of the role of speech therapists in the management of speech and language disorders. Methodology: The researchers examined the clinical records of 1140 patients (797 males and 343 females) who received clinical services at the UJ-HSC between the years 2009 and 2014 for the purpose of data analysis for this study. The main variables examined in the study were disorder type and gender. Participants were divided into four age groups: children, adolescents, adults, and older adults. The examined disorders were classified as either speech disorders, language disorders, or dysphagia (i.e., swallowing problems). The disorders were further classified as childhood language impairments, articulation disorders, stuttering, cluttering, voice disorders, aphasia, and dysphagia. Results: The results indicated that the prevalence for language disorders was the highest (50.7%) followed by speech disorders (48.3%), and dysphagia (0.9%). The majority of patients who were seen at the JU-HSC were diagnosed with childhood language impairments (47.3%) followed consecutively by articulation disorders (21.1%), stuttering (16.3%), voice disorders (12.1%), aphasia (2.2%), dysphagia (0.9%), and cluttering (0.2%). As for gender, the majority of patients seen at the clinic were males in all disorders except for voice disorders and cluttering. Discussion: The results of the present study indicate that the majority of examined patients were diagnosed with childhood language impairments. Based on this result, the researchers suggest that there seems to be a high prevalence of childhood language impairments among children in Jordan compared to other types of speech and language disorders. The researchers also suggest that there is a need for further examination of the actual prevalence data on speech and language disorders in Jordan. The fact that many of the children seen at the UJ-HSC were brought to the clinic either as a result of parental concern or teacher referral indicates that there seems to an increased awareness among parents and teachers about the services speech pathologists can provide about assessment and treatment of childhood speech and language disorders. The small percentage of other disorders (i.e., stuttering, cluttering, dysphasia, aphasia, and voice disorders) seen at the UJ-HSC may indicate a little awareness by the local community about the role of speech pathologists in the assessment and treatment of these disorders. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clinic" title="clinic">clinic</a>, <a href="https://publications.waset.org/abstracts/search?q=disorders" title=" disorders"> disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=language" title=" language"> language</a>, <a href="https://publications.waset.org/abstracts/search?q=profile" title=" profile"> profile</a>, <a href="https://publications.waset.org/abstracts/search?q=speech" title=" speech"> speech</a> </p> <a href="https://publications.waset.org/abstracts/51540/a-profile-of-the-patients-at-the-hearing-and-speech-clinic-at-the-university-of-jordan-a-retrospective-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2178</span> Topology-Based Character Recognition Method for Coin Date Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xingyu%20Pan">Xingyu Pan</a>, <a href="https://publications.waset.org/abstracts/search?q=Laure%20Tougne"> Laure Tougne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For recognizing coins, the graved release date is important information to identify precisely its monetary type. However, reading characters in coins meets much more obstacles than traditional character recognition tasks in the other fields, such as reading scanned documents or license plates. To address this challenging issue in a numismatic context, we propose a training-free approach dedicated to detection and recognition of the release date of the coin. In the first step, the date zone is detected by comparing histogram features; in the second step, a topology-based algorithm is introduced to recognize coin numbers with various font types represented by binary gradient map. Our method obtained a recognition rate of 92% on synthetic data and of 44% on real noised data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coin" title="coin">coin</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=topology" title=" topology"> topology</a> </p> <a href="https://publications.waset.org/abstracts/55637/topology-based-character-recognition-method-for-coin-date-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2177</span> Role of Speech Articulation in English Language Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadija%20Rafi">Khadija Rafi</a>, <a href="https://publications.waset.org/abstracts/search?q=Neha%20Jamil"> Neha Jamil</a>, <a href="https://publications.waset.org/abstracts/search?q=Laiba%20Khalid"> Laiba Khalid</a>, <a href="https://publications.waset.org/abstracts/search?q=Meerub%20Nawaz"> Meerub Nawaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahwish%20Farooq"> Mahwish Farooq</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech articulation is a complex process to produce intelligible sounds with the help of precise movements of various structures within the vocal tract. All these structures in the vocal tract are named as articulators, which comprise lips, teeth, tongue, and palate. These articulators work together to produce a range of distinct phonemes, which happen to be the basis of language. It starts with the airstream from the lungs passing through the trachea and into oral and nasal cavities. When the air passes through the mouth, the tongue and the muscles around it form such coordination it creates certain sounds. It can be seen when the tongue is placed in different positions- sometimes near the alveolar ridge, soft palate, roof of the mouth or the back of the teeth which end up creating unique qualities of each phoneme. We can articulate vowels with open vocal tracts, but the height and position of the tongue is different every time depending upon each vowel, while consonants can be pronounced when we create obstructions in the airflow. For instance, the alphabet ‘b’ is a plosive and can be produced only by briefly closing the lips. Articulation disorders can not only affect communication but can also be a hurdle in speech production. To improve articulation skills for such individuals, doctors often recommend speech therapy, which involves various kinds of exercises like jaw exercises and tongue twisters. However, this disorder is more common in children who are going through developmental articulation issues right after birth, but in adults, it can be caused by injury, neurological conditions, or other speech-related disorders. In short, speech articulation is an essential aspect of productive communication, which also includes coordination of the specific articulators to produce different intelligible sounds, which are a vital part of spoken language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=linguistics" title="linguistics">linguistics</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20articulation" title=" speech articulation"> speech articulation</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20therapy" title=" speech therapy"> speech therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20learning" title=" language learning"> language learning</a> </p> <a href="https://publications.waset.org/abstracts/176220/role-of-speech-articulation-in-english-language-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176220.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">62</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2176</span> Hate Speech in Selected Nigerian Newspapers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laurel%20Chikwado%20Madumere">Laurel Chikwado Madumere</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20O.%20Ugorji"> Kevin O. Ugorji</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A speech is said to be full of hate when it appropriates disparaging and vituperative locutions and/or appellations, which are riddled with prejudices and misconceptions about an antagonizing party on the grounds of gender, race, political orientation, religious affiliations, tribe, etc. Due largely to the dichotomies and polarities that exist in Nigeria across political ideological spectrum, tribal affiliations, and gender contradistinctions, there are possibilities for the existence of socioeconomic, religious and political conditions that would induce, provoke and catalyze hate speeches in Nigeria’s mainstream media. Therefore the aim of this paper is to investigate, using select daily newspapers in Nigeria, the extent and complexity of those likely hate speeches that emanate from the pluralism in Nigeria and to set in to relief, the discrepancies and contrariety in the interpretation of those hate words. To achieve the above, the paper shall be qualitative in orientation as it shall be using the Speech Act Theory of J. L. Austin and J. R. Searle to interpret and evaluate the hate speeches in the select Nigerian daily newspapers. Also this paper shall help to elucidate the conditions that generate hate, and inform the government and NGOs how best to approach those conditions and put an end to the possible violence and extremism that emanate from extreme cases of hate. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extremism" title="extremism">extremism</a>, <a href="https://publications.waset.org/abstracts/search?q=gender" title=" gender"> gender</a>, <a href="https://publications.waset.org/abstracts/search?q=hate%20speech" title=" hate speech"> hate speech</a>, <a href="https://publications.waset.org/abstracts/search?q=pluralism" title=" pluralism"> pluralism</a>, <a href="https://publications.waset.org/abstracts/search?q=prejudice" title=" prejudice"> prejudice</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20act%20theory" title=" speech act theory"> speech act theory</a> </p> <a href="https://publications.waset.org/abstracts/124330/hate-speech-in-selected-nigerian-newspapers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">146</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2175</span> Exploring Multi-Feature Based Action Recognition Using Multi-Dimensional Dynamic Time Warping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guoliang%20Lu">Guoliang Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Changhou%20Lu"> Changhou Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xueyong%20Li"> Xueyong Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In action recognition, previous studies have demonstrated the effectiveness of using multiple features to improve the recognition performance. We focus on two practical issues: i) most studies use a direct way of concatenating/accumulating multi features to evaluate the similarity between two actions. This way could be too strong since each kind of feature can include different dimensions, quantities, etc; ii) in many studies, the employed classification methods lack of a flexible and effective mechanism to add new feature(s) into classification. In this paper, we explore an unified scheme based on recently-proposed multi-dimensional dynamic time warping (MD-DTW). Experiments demonstrated the scheme's effectiveness of combining multi-feature and the flexibility of adding new feature(s) to increase the recognition performance. In addition, the explored scheme also provides us an open architecture for using new advanced classification methods in the future to enhance action recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action%20recognition" title="action recognition">action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=multi%20features" title=" multi features"> multi features</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20time%20warping" title=" dynamic time warping"> dynamic time warping</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20combination" title=" feature combination"> feature combination</a> </p> <a href="https://publications.waset.org/abstracts/33238/exploring-multi-feature-based-action-recognition-using-multi-dimensional-dynamic-time-warping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33238.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">437</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2174</span> Improved Dynamic Bayesian Networks Applied to Arabic On Line Characters Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Redouane%20Tlemsani">Redouane Tlemsani</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelkader%20Benyettou"> Abdelkader Benyettou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Work is in on line Arabic character recognition and the principal motivation is to study the Arab manuscript with on line technology. This system is a Markovian system, which one can see as like a Dynamic Bayesian Network (DBN). One of the major interests of these systems resides in the complete models training (topology and parameters) starting from training data. Our approach is based on the dynamic Bayesian Networks formalism. The DBNs theory is a Bayesians networks generalization to the dynamic processes. Among our objective, amounts finding better parameters, which represent the links (dependences) between dynamic network variables. In applications in pattern recognition, one will carry out the fixing of the structure, which obliges us to admit some strong assumptions (for example independence between some variables). Our application will relate to the Arabic isolated characters on line recognition using our laboratory database: NOUN. A neural tester proposed for DBN external optimization. The DBN scores and DBN mixed are respectively 70.24% and 62.50%, which lets predict their further development; other approaches taking account time were considered and implemented until obtaining a significant recognition rate 94.79%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic%20on%20line%20character%20recognition" title="Arabic on line character recognition">Arabic on line character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20Bayesian%20network" title=" dynamic Bayesian network"> dynamic Bayesian network</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/7319/improved-dynamic-bayesian-networks-applied-to-arabic-on-line-characters-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2173</span> Absence of Developmental Change in Epenthetic Vowel Duration in Japanese Speakers’ English</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takayuki%20Konishi">Takayuki Konishi</a>, <a href="https://publications.waset.org/abstracts/search?q=Kakeru%20Yazawa"> Kakeru Yazawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariko%20Kondo"> Mariko Kondo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study examines developmental change in the production of epenthetic vowels by Japanese learners of English in relation to acquisition of L2 English speech rhythm. Seventy-two Japanese learners of English in the <em>J-AESOP</em> corpus were divided into lower- and higher-level learners according to their proficiency score and the frequency of vowel epenthesis. Three learners were excluded because no vowel epenthesis was observed in their utterances. The analysis of their read English speech data showed no statistical difference between lower- and higher-level learners, implying the absence of any developmental change in durations of epenthetic vowels. This result, together with the findings of previous studies, will be discussed in relation to the transfer of L1 phonology and manifestation of L2 English rhythm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vowel%20epenthesis" title="vowel epenthesis">vowel epenthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=Japanese%20learners%20of%20English" title=" Japanese learners of English"> Japanese learners of English</a>, <a href="https://publications.waset.org/abstracts/search?q=L2%20speech%20corpus" title=" L2 speech corpus"> L2 speech corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20rhythm" title=" speech rhythm"> speech rhythm</a> </p> <a href="https://publications.waset.org/abstracts/61343/absence-of-developmental-change-in-epenthetic-vowel-duration-in-japanese-speakers-english" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61343.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2172</span> Object Recognition Approach Based on Generalized Hough Transform and Color Distribution Serving in Generating Arabic Sentences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Farhani">Nada Farhani</a>, <a href="https://publications.waset.org/abstracts/search?q=Naim%20Terbeh"> Naim Terbeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Zrigui"> Mounir Zrigui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The recognition of the objects contained in images has always presented a challenge in the field of research because of several difficulties that the researcher can envisage because of the variability of shape, position, contrast of objects, etc. In this paper, we will be interested in the recognition of objects. The classical Hough Transform (HT) presented a tool for detecting straight line segments in images. The technique of HT has been generalized (GHT) for the detection of arbitrary forms. With GHT, the forms sought are not necessarily defined analytically but rather by a particular silhouette. For more precision, we proposed to combine the results from the GHT with the results from a calculation of similarity between the histograms and the spatiograms of the images. The main purpose of our work is to use the concepts from recognition to generate sentences in Arabic that summarize the content of the image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition%20of%20shape" title="recognition of shape">recognition of shape</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20hough%20transformation" title=" generalized hough transformation"> generalized hough transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiogram" title=" spatiogram"> spatiogram</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/101706/object-recognition-approach-based-on-generalized-hough-transform-and-color-distribution-serving-in-generating-arabic-sentences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101706.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2171</span> Grammatical and Lexical Cohesion in the Japan’s Prime Minister Shinzo Abe’s Speech Text ‘Nihon wa Modottekimashita’</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadya%20Inda%20Syartanti">Nadya Inda Syartanti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research aims to identify, classify, and analyze descriptively the aspects of grammatical and lexical cohesion in the speech text of Japan’s Prime Minister Shinzo Abe entitled Nihon wa Modotte kimashita delivered in Washington DC, the United States on February 23, 2013, as a research data source. The method used is qualitative research, which uses descriptions through words that are applied by analyzing aspects of grammatical and lexical cohesion proposed by Halliday and Hasan (1976). The aspects of grammatical cohesion consist of references (personal, demonstrative, interrogative pronouns), substitution, ellipsis, and conjunction. In contrast, lexical cohesion consists of reiteration (repetition, synonym, antonym, hyponym, meronym) and collocation. Data classification is based on the 6 aspects of the cohesion. Through some aspects of cohesion, this research tries to find out the frequency of using grammatical and lexical cohesion in Shinzo Abe's speech text entitled Nihon wa Modotte kimashita. The results of this research are expected to help overcome the difficulty of understanding speech texts in Japanese. Therefore, this research can be a reference for learners, researchers, and anyone who is interested in the field of discourse analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cohesion" title="cohesion">cohesion</a>, <a href="https://publications.waset.org/abstracts/search?q=grammatical%20cohesion" title=" grammatical cohesion"> grammatical cohesion</a>, <a href="https://publications.waset.org/abstracts/search?q=lexical%20cohesion" title=" lexical cohesion"> lexical cohesion</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20text" title=" speech text"> speech text</a>, <a href="https://publications.waset.org/abstracts/search?q=Shinzo%20Abe" title=" Shinzo Abe"> Shinzo Abe</a> </p> <a href="https://publications.waset.org/abstracts/107621/grammatical-and-lexical-cohesion-in-the-japans-prime-minister-shinzo-abes-speech-text-nihon-wa-modottekimashita" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2170</span> Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Rhea%20Devaiah">K. Rhea Devaiah</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20S.%20Premalatha"> B. S. Premalatha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Tonsillar Lingual sulcus is the area between the tonsils and the base of the tongue. The surgical resection of the lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, types and extent of surgery and also the flexibility of the remaining structures. Need of the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with the lesion in the Tonsillar Lingual Sulcus and post-operative functions. Aim: Evaluating the speech and swallow functions post-intensive speech and swallowing rehabilitation. The objectives are to evaluate the speech intelligibility and swallowing functions after intensive therapy and assess the quality of life. Method: The present study describes a report of an individual aged 47years male, with the diagnosis of basaloid squamous cell carcinoma, left tonsillar lingual sulcus (pT2n2M0) and underwent wide local excision with left radical neck dissection with PMMC flap reconstruction. Post-surgery the patient came with a complaint of reduced speech intelligibility, and difficulty in opening the mouth and swallowing. Detailed evaluation of the speech and swallowing functions were carried out such as OPME, articulation test, speech intelligibility, different phases of swallowing and trismus evaluation. Self-reported questionnaires such as SHI-E(Speech handicap Index- Indian English), DHI (Dysphagia handicap Index) and SESEQ -K (Self Evaluation of Swallowing Efficiency in Kannada) were also administered to know what the patient feels about his problem. Based on the evaluation, the patient was diagnosed with pharyngeal phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised weekly twice for the duration of 1 hour. Results: Totally the patient attended 10 intensive speech and swallowing therapy sessions. Results indicated misarticulation of speech sounds such as lingua-palatal sounds. Mouth opening was restricted to one finger width with difficulty chewing, masticating, and swallowing the bolus. Intervention strategies included Oro motor exercise, Indirect swallowing therapy, usage of a trismus device to facilitate mouth opening, and change in the food consistency to help to swallow. A practice session was held with articulation drills to improve the production of speech sounds and also improve speech intelligibility. Significant changes in articulatory production and speech intelligibility and swallowing abilities were observed. The self-rated quality of life measures such as DHI, SHI and SESE Q-K revealed no speech handicap and near-normal swallowing ability indicating the improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy post carcinoma in the tonsillar lingual sulcus is crucial as the tongue plays an important role in both speech and swallowing. The role of Speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving the overall quality of life. With intensive speech-language and swallowing therapy post-surgery for oral cancer, there can be a significant change in the speech outcome and swallowing functions depending on the site and extent of lesions which will thereby improve the individual’s QOL. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=oral%20cancer" title="oral cancer">oral cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20and%20swallowing%20therapy" title=" speech and swallowing therapy"> speech and swallowing therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20intelligibility" title=" speech intelligibility"> speech intelligibility</a>, <a href="https://publications.waset.org/abstracts/search?q=trismus" title=" trismus"> trismus</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20of%20life" title=" quality of life"> quality of life</a> </p> <a href="https://publications.waset.org/abstracts/152166/speech-and-swallowing-function-after-tonsillo-lingual-sulcus-resection-with-pmmc-flap-reconstruction-a-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152166.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2169</span> Real Time Multi Person Action Recognition Using Pose Estimates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity recognition is an important aspect of video analytics, and many approaches have been recommended to enable action recognition. In this approach, the model is used to identify the action of the multiple people in the frame and classify them accordingly. A few approaches use RNNs and 3D CNNs, which are computationally expensive and cannot be trained with the small datasets which are currently available. Multi-person action recognition has been performed in order to understand the positions and action of people present in the video frame. The size of the video frame can be adjusted as a hyper-parameter depending on the hardware resources available. OpenPose has been used to calculate pose estimate using CNN to produce heap-maps, one of which provides skeleton features, which are basically joint features. The features are then extracted, and a classification algorithm can be applied to classify the action. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimates" title=" pose estimates"> pose estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127872/real-time-multi-person-action-recognition-using-pose-estimates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127872.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2168</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2167</span> A Neural Approach for the Offline Recognition of the Arabic Handwritten Words of the Algerian Departments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salim%20Ouchtati">Salim Ouchtati</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean%20Sequeira"> Jean Sequeira</a>, <a href="https://publications.waset.org/abstracts/search?q=Mouldi%20Bedda"> Mouldi Bedda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work we present an off line system for the recognition of the Arabic handwritten words of the Algerian departments. The study is based mainly on the evaluation of neural network performances, trained with the gradient back propagation algorithm. The used parameters to form the input vector of the neural network are extracted on the binary images of the handwritten word by several methods: the parameters of distribution, the moments centered of the different projections and the Barr features. It should be noted that these methods are applied on segments gotten after the division of the binary image of the word in six segments. The classification is achieved by a multi layers perceptron. Detailed experiments are carried and satisfactory recognition results are reported. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=handwritten%20word%20recognition" title="handwritten word recognition">handwritten word recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20extraction" title=" features extraction "> features extraction </a> </p> <a href="https://publications.waset.org/abstracts/29848/a-neural-approach-for-the-offline-recognition-of-the-arabic-handwritten-words-of-the-algerian-departments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29848.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">513</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2166</span> The Communicative Nature of Linguistic Interference in Learning and Teaching of Slavic Languages </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kseniia%20Fedorova">Kseniia Fedorova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article is devoted to interlinguistic homonymy and enantiosemy analysis. These phenomena belong to the process of linguistic interference, which leads to violation of the communicative utterances integrity and causes misunderstanding between foreign interlocutors - native speakers of different Slavic languages. More attention is paid to investigation of non-typical speech situations, which occurred spontaneously or created by somebody intentionally being based on described phenomenon mechanism. The classification of typical students' mistakes connected with the paradox of interference is being represented in the article. The survey contributes to speech act theory, contemporary linguodidactics, translation science and comparative lexicology of Slavonic languages. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adherent%20enantiosemy" title="adherent enantiosemy">adherent enantiosemy</a>, <a href="https://publications.waset.org/abstracts/search?q=interference" title=" interference"> interference</a>, <a href="https://publications.waset.org/abstracts/search?q=interslavonic%20homonymy" title=" interslavonic homonymy"> interslavonic homonymy</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20act" title=" speech act"> speech act</a> </p> <a href="https://publications.waset.org/abstracts/58467/the-communicative-nature-of-linguistic-interference-in-learning-and-teaching-of-slavic-languages" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">244</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2165</span> The Relation between Subtitling and General Translation from a Didactic Perspective</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sonia%20Gonzalez%20Cruz">Sonia Gonzalez Cruz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Subtitling activities allow for acquiring and developing certain translation skills, and they also have a great impact on the students' motivation. Active subtitling is a relatively recent activity that has generated a lot of interest particularly in the field of second-language acquisition, but it is also present within both the didactics of general translation and language teaching for translators. It is interesting to analyze the level of inclusion of these new resources into the existent curricula and observe to what extent these different teaching methods are being used in the translation classroom. Although subtitling has already become an independent discipline of study and it is considered to be a type of translation on its own, it is necessary to do further research on the different didactic varieties that this type of audiovisual translation offers. Therefore, this project is framed within the field of the didactics of translation, and it focuses on the relationship between the didactics of general translation and active subtitling as a didactic tool. Its main objective is to analyze the inclusion of interlinguistic active subtitling in general translation curricula at different universities. As it has been observed so far, the analyzed curricula do not make any type of reference to the use of this didactic tool in general translation classrooms. However, they do register the inclusion of other audiovisual activities such as dubbing, script translation or video watching, among others. By means of online questionnaires and interviews, the main goal is to confirm the results obtained after the observation of the curricula and find out to what extent subtitling has actually been included into general translation classrooms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=subtitling" title="subtitling">subtitling</a>, <a href="https://publications.waset.org/abstracts/search?q=general%20translation" title=" general translation"> general translation</a>, <a href="https://publications.waset.org/abstracts/search?q=didactics" title=" didactics"> didactics</a>, <a href="https://publications.waset.org/abstracts/search?q=translation%20competence" title=" translation competence"> translation competence</a> </p> <a href="https://publications.waset.org/abstracts/83484/the-relation-between-subtitling-and-general-translation-from-a-didactic-perspective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83484.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2164</span> Study of Multimodal Resources in Interactions Involving Children with Autistic Spectrum Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fernanda%20Miranda%20da%20Cruz">Fernanda Miranda da Cruz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to systematize, descriptively and analytically, the relations between language, body and material world explored in a specific empirical context: everyday co-presence interactions between children diagnosed with Autistic Spectrum Disease ASD and various interlocutors. We will work based on 20 hours of an audiovisual corpus in Brazilian Portuguese language. This analysis focuses on 1) the analysis of daily interactions that have the presence/participation of subjects with a diagnosis of ASD based on an embodied interaction perspective; 2) the study of the status and role of gestures, body and material world in the construction and constitution of human interaction and its relation with linguistic-cognitive processes and Autistic Spectrum Disorders; 3) to highlight questions related to the field of videoanalysis, such as: procedures for recording interactions in complex environments (involving many participants, use of objects and body movement); the construction of audiovisual corpora for linguistic-interaction research; the invitation to a visual analytical mentality of human social interactions involving not only the verbal aspects that constitute it, but also the physical space, the body and the material world. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism%20spectrum%20disease" title="autism spectrum disease">autism spectrum disease</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20interaction" title=" social interaction"> social interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=non-verbal%20interactions" title=" non-verbal interactions"> non-verbal interactions</a> </p> <a href="https://publications.waset.org/abstracts/121698/study-of-multimodal-resources-in-interactions-involving-children-with-autistic-spectrum-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121698.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2163</span> A Chinese Nested Named Entity Recognition Model Based on Lexical Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuo%20Liu">Shuo Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Liu"> Dan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of named entity recognition, most of the research has been conducted around simple entities. However, for nested named entities, which still contain entities within entities, it has been difficult to identify them accurately due to their boundary ambiguity. In this paper, a hierarchical recognition model is constructed based on the grammatical structure and semantic features of Chinese text for boundary calculation based on lexical features. The analysis is carried out at different levels in terms of granularity, semantics, and lexicality, respectively, avoiding repetitive work to reduce computational effort and using the semantic features of words to calculate the boundaries of entities to improve the accuracy of the recognition work. The results of the experiments carried out on web-based microblogging data show that the model achieves an accuracy of 86.33% and an F1 value of 89.27% in recognizing nested named entities, making up for the shortcomings of some previous recognition models and improving the efficiency of recognition of nested named entities. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coarse-grained" title="coarse-grained">coarse-grained</a>, <a href="https://publications.waset.org/abstracts/search?q=nested%20named%20entity" title=" nested named entity"> nested named entity</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinese%20natural%20language%20processing" title=" Chinese natural language processing"> Chinese natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20embedding" title=" word embedding"> word embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=T-SNE%20dimensionality%20reduction%20algorithm" title=" T-SNE dimensionality reduction algorithm"> T-SNE dimensionality reduction algorithm</a> </p> <a href="https://publications.waset.org/abstracts/156934/a-chinese-nested-named-entity-recognition-model-based-on-lexical-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156934.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2162</span> Diplomatic Public Relations Techniques for Official Recognition of Palestine State in Europe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bilgehan%20Gultekin">Bilgehan Gultekin</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuba%20Gultekin"> Tuba Gultekin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diplomatic public relations gives an ideal concept for recognition of palestine state in all over the europe. The first step of official recognition is approval of palestine state in international political organisations such as United Nations and Nato. So, diplomatic public relations provides a recognition process in communication scale. One of the aims of the study titled “Diplomatic Public Relations Techniques for Recognition of Palestine State in Europe” is to present some communication projects on diplomatic way. The study also aims at showing communication process at diplomatic level. The most important level of such kind of diplomacy is society based diplomacy. Moreover,The study provides a wider perspective that gives some creative diplomatic communication strategies for attracting society. To persuade the public for official recognition also is key element of this process. The study also finds new communication routes including persuasion techniques for society. All creative projects are supporting parts in original persuasive process of official recognition of Palestine. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diplomatic%20public%20relations" title="diplomatic public relations">diplomatic public relations</a>, <a href="https://publications.waset.org/abstracts/search?q=diplomatic%20communication%20strategies" title=" diplomatic communication strategies"> diplomatic communication strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=diplomatic%20communication" title=" diplomatic communication"> diplomatic communication</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20relations" title=" public relations"> public relations</a> </p> <a href="https://publications.waset.org/abstracts/33218/diplomatic-public-relations-techniques-for-official-recognition-of-palestine-state-in-europe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">455</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2161</span> Named Entity Recognition System for Tigrinya Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sham%20Kidane">Sham Kidane</a>, <a href="https://publications.waset.org/abstracts/search?q=Fitsum%20Gaim"> Fitsum Gaim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Abdella"> Ibrahim Abdella</a>, <a href="https://publications.waset.org/abstracts/search?q=Sirak%20Asmerom"> Sirak Asmerom</a>, <a href="https://publications.waset.org/abstracts/search?q=Yoel%20Ghebrihiwot"> Yoel Ghebrihiwot</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Mulugeta"> Simon Mulugeta</a>, <a href="https://publications.waset.org/abstracts/search?q=Natnael%20Ambassager"> Natnael Ambassager</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The lack of annotated datasets is a bottleneck to the progress of NLP in low-resourced languages. The work presented here consists of large-scale annotated datasets and models for the named entity recognition (NER) system for the Tigrinya language. Our manually constructed corpus comprises over 340K words tagged for NER, with over 118K of the tokens also having parts-of-speech (POS) tags, annotated with 12 distinct classes of entities, represented using several types of tagging schemes. We conducted extensive experiments covering convolutional neural networks and transformer models; the highest performance achieved is 88.8% weighted F1-score. These results are especially noteworthy given the unique challenges posed by Tigrinya’s distinct grammatical structure and complex word morphologies. The system can be an essential building block for the advancement of NLP systems in Tigrinya and other related low-resourced languages and serve as a bridge for cross-referencing against higher-resourced languages. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tigrinya%20NER%20corpus" title="Tigrinya NER corpus">Tigrinya NER corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=TiBERT" title=" TiBERT"> TiBERT</a>, <a href="https://publications.waset.org/abstracts/search?q=TiRoBERTa" title=" TiRoBERTa"> TiRoBERTa</a>, <a href="https://publications.waset.org/abstracts/search?q=BiLSTM-CRF" title=" BiLSTM-CRF"> BiLSTM-CRF</a> </p> <a href="https://publications.waset.org/abstracts/177845/named-entity-recognition-system-for-tigrinya-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177845.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=6" rel="prev">‹</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=6">6</a></li> <li class="page-item active"><span class="page-link">7</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=78">78</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=79">79</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition&page=8" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>