<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: identification of person</title> <meta name="description" content="Search results for: identification of person"> <meta name="keywords" content="identification of person"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="identification of person" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="identification of person"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4278</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: identification of person</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4278</span> Person Re-Identification using Siamese Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sello%20Mokwena">Sello Mokwena</a>, <a href="https://publications.waset.org/abstracts/search?q=Monyepao%20Thabang"> Monyepao Thabang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. 
The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network%20topology" title=" convolutional neural network topology"> convolutional neural network topology</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20tracking" title=" person tracking"> person tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=siamese" title=" siamese"> siamese</a> </p> <a href="https://publications.waset.org/abstracts/171989/person-re-identification-using-siamese-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4277</span> Gait Biometric for Person Re-Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Lavanya%20Srinivasan">Lavanya Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric identification relies on unique traits of a person, such as fingerprints, iris, ear shape, and voice, which require the subject's permission and physical contact. Gait biometrics instead identify a person by extracting features of their unique walking motion. The main advantage of gait biometrics is that a person's gait can be captured at a distance, without any physical contact. In this work, gait biometrics are used for person re-identification. Each person walking naturally is compared with the same person walking with a bag, a coat, and a case, recorded using longwave infrared, shortwave infrared, medium-wave infrared, and visible cameras, in both rural and urban environments. The pre-processing pipeline includes person detection using YOLO, background subtraction, silhouette extraction, and synthesis of a Gait Entropy Image by averaging the silhouettes. Motion features are extracted from the Gait Entropy Image, reduced in dimensionality by principal component analysis, and recognised using different classifiers. The comparative results show that linear discriminant analysis outperforms the other classifiers, with 95.8% accuracy for the visible camera in the rural dataset and 94.8% for longwave infrared in the urban dataset. 
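The silhouette-averaging and classification pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: synthetic silhouettes stand in for the YOLO-detected, background-subtracted frames, the Gait Entropy Image is taken as the per-pixel Shannon entropy of the averaged silhouettes, and all shapes and subject counts are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gait_entropy_image(silhouettes):
    # Average the binary silhouettes of one gait cycle, then take the
    # per-pixel Shannon entropy of the foreground probability.
    p = np.clip(np.mean(silhouettes, axis=0), 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(0)
X, y = [], []
for i in range(40):  # 40 synthetic sequences from 2 hypothetical subjects
    subject = i % 2
    base = (rng.random((32, 24)) < (0.3 + 0.3 * subject)).astype(float)
    frames = np.stack([np.roll(base, s, axis=1) for s in range(20)])  # fake gait cycle
    X.append(gait_entropy_image(frames).ravel())
    y.append(subject)
X, y = np.array(X), np.array(y)

# Dimensionality reduction with PCA, then recognition with LDA,
# mirroring the abstract's feature-reduction and classification steps
feats = PCA(n_components=10, random_state=0).fit_transform(X)
clf = LinearDiscriminantAnalysis().fit(feats[:30], y[:30])
accuracy = clf.score(feats[30:], y[30:])
```

In a real system the frames would come from the recorded infrared/visible videos rather than random noise, and the classifier would be compared against alternatives as the abstract describes.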
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric" title="biometric">biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=gait" title=" gait"> gait</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouettes" title=" silhouettes"> silhouettes</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO" title=" YOLO"> YOLO</a> </p> <a href="https://publications.waset.org/abstracts/136879/gait-biometric-for-person-re-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4276</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. 
Face detection and face identification are the two phases that typically make up a human recognition system. This paper explains the idea behind designing and creating a face recognition system using deep learning with Azure ML and Python's OpenCV. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results of 98.46% accuracy using Fast-RCNN are reported, along with the performance of the algorithms under different training conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4275</span> Images Selection and Best Descriptor Combination for Multi-Shot Person Re-Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousra%20Hadj%20Hassen">Yousra Hadj Hassen</a>, <a
href="https://publications.waset.org/abstracts/search?q=Walid%20Ayedi"> Walid Ayedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tarek%20Ouni"> Tarek Ouni</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Jallouli"> Mohamed Jallouli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To re-identify a person is to check whether he or she has already been seen over a camera network. Re-identifying people over large public camera networks has recently become a crucial task of great importance for ensuring public security, and the vision community has investigated this area deeply. Most existing research relies only on spatial appearance information from either one or multiple images of a person, whereas a real person re-identification framework is a multi-shot scenario. However, efficiently modelling a person’s appearance and choosing the best samples remain challenging problems. In this work, an extensive comparison of state-of-the-art descriptors combined with the proposed frame selection method is presented. Specifically, we evaluate the sample selection approach using multiple proposed descriptors. We show the effectiveness and advantages of the proposed method through extensive comparisons with related state-of-the-art approaches on two standard datasets, PRID2011 and iLIDS-VID. 
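One common way to choose representative samples from a multi-shot tracklet is to cluster its per-frame descriptors and keep the frame nearest each cluster centre. The abstract does not specify the authors' actual selection method, so the sketch below only illustrates this generic idea, with random stand-in descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(track_descriptors, n_samples=3):
    # Cluster the tracklet's per-frame descriptors, then keep the frame
    # closest to each cluster centre as a representative sample.
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=0).fit(track_descriptors)
    keep = [int(np.argmin(np.linalg.norm(track_descriptors - c, axis=1)))
            for c in km.cluster_centers_]
    return sorted(set(keep))

rng = np.random.default_rng(3)
tracklet = rng.normal(size=(40, 64))  # 40 frames, 64-D appearance descriptor each
key_frames = select_key_frames(tracklet, n_samples=3)
```

The selected frames' descriptors would then be compared across cameras with whichever descriptor performed best in the paper's evaluation.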
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptor" title=" descriptor"> descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-shot" title=" multi-shot"> multi-shot</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=selection" title=" selection"> selection</a> </p> <a href="https://publications.waset.org/abstracts/65815/images-selection-and-best-descriptor-combination-for-multi-shot-person-re-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65815.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4274</span> Bag of Local Features for Person Re-Identification on Large-Scale Datasets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yixiu%20Liu">Yixiu Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunzhou%20Zhang"> Yunzhou Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianning%20Chi"> Jianning Chi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Chu"> Hao Chu</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Zheng"> Rui Zheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Libo%20Sun"> Libo 
Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Guanghao%20Chen"> Guanghao Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Fangtong%20Zhou"> Fangtong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the last few years, large-scale person re-identification has attracted considerable attention in video surveillance, since it has potential applications in public safety management. However, it remains a challenging task given the variation in human pose, changing illumination conditions, and the lack of paired samples. Although accuracy has been significantly improved, training still depends heavily on labelled sample data. To tackle this problem, a new strategy is proposed based on the bag-of-visual-words (BoVW) model for designing the feature representation, which has been widely used in the field of image retrieval. Local features are extracted, and a more discriminative feature representation is obtained by cross-view dictionary learning (CDL); the assignment map is then obtained through k-means clustering. Finally, BoVW histograms are formed, which encode the images with the statistics of the feature classes in the assignment map. Experiments conducted on the CUHK03, Market1501 and MARS datasets show that the proposed method performs favorably against existing approaches. 
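The BoVW encoding step described above can be sketched as follows. This simplified sketch builds the codebook with plain k-means on raw descriptors, omitting the cross-view dictionary learning stage; all descriptors are random stand-ins rather than real image features.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(local_features, kmeans):
    # Assign each local descriptor to its nearest visual word and
    # return the normalized word-count histogram for the image.
    words = kmeans.predict(local_features)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
# Stand-in local descriptors (50 per image, 16-D) from a gallery of 12 images
gallery = [rng.normal(loc=i % 3, size=(50, 16)) for i in range(12)]

# Build the visual-word codebook by k-means over all local descriptors
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(gallery))

# Encode every image as a bag-of-visual-words histogram; re-identification
# then ranks gallery images by histogram distance to a query
hists = np.array([bovw_histogram(feats, codebook) for feats in gallery])
ranking = np.argsort([np.linalg.norm(hists[0] - h) for h in hists[1:]])
```

In the paper's pipeline the descriptors assigned to words would first be made more discriminative by CDL before the k-means assignment.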
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20visual%20words" title="bag of visual words">bag of visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-view%20dictionary%20learning" title=" cross-view dictionary learning"> cross-view dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=reranking" title=" reranking"> reranking</a> </p> <a href="https://publications.waset.org/abstracts/85908/bag-of-local-features-for-person-re-identification-on-large-scale-datasets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4273</span> Providing a Secure, Reliable and Decentralized Document Management Solution Using Blockchain by a Virtual Identity Card</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meet%20Shah">Meet Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Ankita%20Aditya"> Ankita Aditya</a>, <a href="https://publications.waset.org/abstracts/search?q=Dhruv%20Bindra"> Dhruv Bindra</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20S.%20Omkar"> V. S. 
Omkar</a>, <a href="https://publications.waset.org/abstracts/search?q=Aashruti%20Seervi"> Aashruti Seervi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today's world, documents are needed everywhere for a smooth workflow in identification processes and other security contexts. The current systems and techniques used for identification require one thing, namely ‘proof of existence’, which involves valid documents, for example, educational or financial records. The main issue with the current identity access management system and digital identification process is that the system is centralized in its network, which makes it inefficient. This paper presents a system that resolves these cited issues. It is based on ‘blockchain’ technology, a ‘decentralized system’ that allows transactions in a decentralized and immutable manner. The primary notion of the model is to ‘have everything with nothing’: the required documents of a person are inter-linked with a single identity card, so that a person can go anywhere without carrying the required documents with him/her. The person just needs to be physically present at a place where documents are necessary, and using a fingerprint impression and an iris scan, the rest of the verification proceeds. Furthermore, some technical overheads and advancements are listed. This paper also aims to lay out a far-reaching vision of blockchain and its impact on future trends. 
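The document-linking idea can be illustrated with plain hashing: each document's digest is bound to a single identity record, and only the digests, not the documents themselves, would need to live on an immutable ledger. This is a hypothetical sketch of the concept, not the authors' implementation; the function and field names are invented for illustration.

```python
import hashlib
import json

def identity_record(person_id, documents):
    # Hash each document, then hash the sorted (name, digest) pairs together
    # with the person's identifier to form one ledger-ready record.
    doc_hashes = {name: hashlib.sha256(data).hexdigest()
                  for name, data in documents.items()}
    payload = json.dumps({"id": person_id, "docs": doc_hashes}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

record = identity_record("person-001", {"degree": b"degree-scan-bytes",
                                        "passport": b"passport-scan-bytes"})
```

Because the record is a deterministic hash, any tampering with a linked document changes the record, which is what makes on-ledger verification against a fingerprint/iris-gated identity possible.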
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blockchain" title="blockchain">blockchain</a>, <a href="https://publications.waset.org/abstracts/search?q=decentralized%20system" title=" decentralized system"> decentralized system</a>, <a href="https://publications.waset.org/abstracts/search?q=fingerprint%20impression" title=" fingerprint impression"> fingerprint impression</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20management" title=" identity management"> identity management</a>, <a href="https://publications.waset.org/abstracts/search?q=iris%20scan" title=" iris scan"> iris scan</a> </p> <a href="https://publications.waset.org/abstracts/118996/providing-a-secure-reliable-and-decentralized-document-management-solution-using-blockchain-by-a-virtual-identity-card" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4272</span> New Approach for Constructing a Secure Biometric Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kebbeb">A. Kebbeb</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Mostefai"> M. Mostefai</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Benmerzoug"> F. Benmerzoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Chahir"> Y. Chahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The multimodal biometric identification is the combination of several biometric systems. 
The challenge of this combination is to reduce the limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and protection of a multimodal biometric database dedicated to an identification system. We use topological watermarking to hide the relation between the face image and the registered descriptors extracted from the other modalities of the same person, for more secure user identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20databases" title="biometric databases">biometric databases</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometrics" title=" multimodal biometrics"> multimodal biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20authentication" title=" security authentication"> security authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20watermarking" title=" digital watermarking"> digital watermarking</a> </p> <a href="https://publications.waset.org/abstracts/3126/new-approach-for-constructing-a-secure-biometric-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">391</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4271</span> Career Anchors and Job Satisfaction of Managers: The Mediating Role of Person-job Fit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azadeh%20Askari">Azadeh Askari</a>, <a
href="https://publications.waset.org/abstracts/search?q=Ali%20Nasery%20Mohamad%20Abadi"> Ali Nasery Mohamad Abadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study was conducted to investigate the relationship between career anchors and job satisfaction, with emphasis on the mediating role of person-job fit. 502 managers and supervisors from ten operational areas of a large energy company were selected through cluster sampling. The instruments used in this study were the Career Anchor Questionnaire, the Job Satisfaction Questionnaire and the Person-Job Fit Questionnaire. Pearson correlation coefficients were used to analyze the data, and AMOS software was used to determine the effect of the career anchor variables and person-job fit on job satisfaction. The anchors of service and dedication, pure challenge, and security and stability increase person-job fit among managers, and person-job fit plays a mediating role in the effect these anchors have on job satisfaction. In contrast, the anchor of independence and autonomy reduces person-job fit. Given the importance of positive organizational attitudes, and in order to achieve an optimal fit between job and worker, human resource processes such as hiring and staffing should take a person's career anchors into account, so that the person can have greater job satisfaction and thus bring higher productivity to themselves and the organization. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=career%20anchor" title="career anchor">career anchor</a>, <a href="https://publications.waset.org/abstracts/search?q=job%20satisfaction" title=" job satisfaction"> job satisfaction</a>, <a href="https://publications.waset.org/abstracts/search?q=person-job%20fit" title=" person-job fit"> person-job fit</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20company" title=" energy company"> energy company</a>, <a href="https://publications.waset.org/abstracts/search?q=managers" title=" managers"> managers</a> </p> <a href="https://publications.waset.org/abstracts/145999/career-anchors-and-job-satisfaction-of-managers-the-mediating-role-of-person-job-fit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4270</span> Person-Environment Fit (PE Fit): Evidence from Brazil</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jucelia%20Appio">Jucelia Appio</a>, <a href="https://publications.waset.org/abstracts/search?q=Danielle%20Deimling%20De%20Carli"> Danielle Deimling De Carli</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Henrique%20Rocha%20Fernandes"> Bruno Henrique Rocha Fernandes</a>, <a href="https://publications.waset.org/abstracts/search?q=Nelson%20Natalino%20Frizon"> Nelson Natalino Frizon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to investigate if there are positive and significant correlations between the dimensions of 
Person-Environment Fit (Person-Job, Person-Organization, Person-Group and Person-Supervisor) at the “Best Companies to Work for” in Brazil in 2017. For that, a quantitative approach with a descriptive method was used, the research sample being defined as the "150 Best Companies to Work for", according to a database collected in 2017 and provided by the Fundação Instituto de Administração (FIA) of the University of São Paulo (USP). For the data analysis procedures, asymmetry and kurtosis, factorial analysis, the Kaiser-Meyer-Olkin (KMO) test, Bartlett's sphericity test and Cronbach's alpha were applied to the 69 research variables, and Pearson's correlation analysis was performed as the statistical technique for testing the hypothesis. As the main result, we highlight that there were positive and significant correlations between the dimensions of Person-Environment Fit, corroborating hypothesis H1 that Person-Job Fit, Person-Organization Fit, Person-Group Fit and Person-Supervisor Fit are positively and significantly correlated. 
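The core statistical step, Pearson correlation among the four fit dimensions, can be sketched as follows. The scores here are simulated with a shared latent component purely for illustration; the real data come from the FIA questionnaire described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 150  # mirrors the "150 Best Companies to Work for" sample size
dims = ["Person-Job", "Person-Organization", "Person-Group", "Person-Supervisor"]

# Simulated fit scores sharing a common latent component, so that the four
# dimensions correlate positively (a stand-in for the questionnaire data)
latent = rng.normal(size=n)
scores = np.array([latent + rng.normal(scale=0.8, size=n) for _ in dims])

# 4x4 Pearson correlation matrix across the fit dimensions
corr = np.corrcoef(scores)
```

Each off-diagonal entry of `corr` is the Pearson r between two fit dimensions; testing H1 amounts to checking that these entries are positive and statistically significant.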
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Human%20Resource%20Management%20%28HRM%29" title="Human Resource Management (HRM)">Human Resource Management (HRM)</a>, <a href="https://publications.waset.org/abstracts/search?q=Person-Environment%20Fit%20%28PE%29" title=" Person-Environment Fit (PE)"> Person-Environment Fit (PE)</a>, <a href="https://publications.waset.org/abstracts/search?q=strategic%20people%20management" title=" strategic people management"> strategic people management</a>, <a href="https://publications.waset.org/abstracts/search?q=best%20companies%20to%20work%20for" title=" best companies to work for"> best companies to work for</a> </p> <a href="https://publications.waset.org/abstracts/101954/person-environment-fit-pe-fit-evidence-from-brazil" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101954.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4269</span> Lateral Cephalometric Radiograph to Determine Sex in Forensic Investigations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paulus%20Maulana">Paulus Maulana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forensic identification helps investigators determine a person's identity. Personal identification is often a problem in civil and criminal cases. Orthodontists, like all other dental professionals, can play a major role by maintaining lateral cephalograms, thus providing important or vital information, or clues, to the legal authorities to help them in their search. 
Radiographic lateral cephalometry is a measurement method focused on the anatomical points of the lateral human skull. Sex determination is one of the most important aspects of personal identification in forensics, and the lateral cephalogram is a valuable tool for identifying sex, as it reveals the morphological details of the skull on a single radiograph. The present study evaluates the role of the lateral cephalogram in sex identification, using linear and angular measurements as parameters. The linear measurements are N-S (anterior cranial length), Sna-Snp (palatal plane length), Me-Go (menton-gonion), N-Sna (midfacial anterior height), Sna-Me (lower anterior face height), and Co-Gn (total mandibular length). The angular measurements are SNA, SNB, ANB, gonial, interincisal, and facial. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lateral%20cephalometry" title="lateral cephalometry">lateral cephalometry</a>, <a href="https://publications.waset.org/abstracts/search?q=cephalogram" title=" cephalogram"> cephalogram</a>, <a href="https://publications.waset.org/abstracts/search?q=sex" title=" sex"> sex</a>, <a href="https://publications.waset.org/abstracts/search?q=forensic" title=" forensic"> forensic</a>, <a href="https://publications.waset.org/abstracts/search?q=parameter" title=" parameter"> parameter</a> </p> <a href="https://publications.waset.org/abstracts/74843/lateral-cephalometric-radiograph-to-determine-sex-in-forensic-investigations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4268</span> ECG Based
Reliable User Identification Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20N.%20Begum">R. N. Begum</a>, <a href="https://publications.waset.org/abstracts/search?q=Ambalika%20Sharma"> Ambalika Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20K.%20Singh"> G. K. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Automatic biometric recognition systems are therefore the need of the hour, and ECG-based systems are an excellent choice due to their appealing inherent characteristics. CNNs are the recent state-of-the-art techniques for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using a densely connected CNN (DenseNet) framework. The study explicitly explores the calibre of dense CNNs for ECG-based human recognition, testing four different DenseNet configurations trained on recordings collected from eight popular ECG databases. With a highest FAR of 0.04% and a highest FRR of 5%, the best-performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient. Thus, they might also be implemented in real-time ECG-based human recognition systems.
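The FAR and FRR figures reported above come from thresholding match scores. As a minimal illustrative sketch (not the paper's code; the scores and threshold below are made up), the two error rates of a score-based biometric verifier can be computed as:

```python
def identification_metrics(genuine_scores, impostor_scores, threshold):
    """Compute FAR and FRR for a score-thresholding biometric verifier.

    genuine_scores:  match scores from same-person comparisons
    impostor_scores: match scores from different-person comparisons
    threshold:       scores >= threshold are accepted
    """
    # FAR: fraction of impostor attempts wrongly accepted
    false_accepts = sum(s >= threshold for s in impostor_scores)
    # FRR: fraction of genuine attempts wrongly rejected
    false_rejects = sum(s < threshold for s in genuine_scores)
    far = false_accepts / len(impostor_scores)
    frr = false_rejects / len(genuine_scores)
    return far, frr
```

Sweeping the threshold trades FAR against FRR, which is why papers report both figures together with the identification accuracy.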
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=Dense%20Networks" title="Dense Networks">Dense Networks</a>, <a href="https://publications.waset.org/abstracts/search?q=Identification%20Rate" title="Identification Rate">Identification Rate</a>, <a href="https://publications.waset.org/abstracts/search?q=Train%2FTest%20split%20ratio" title="Train/Test split ratio">Train/Test split ratio</a> </p> <a href="https://publications.waset.org/abstracts/143509/ecg-based-reliable-user-identification-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4267</span> Identification of Novel Differentially Expressed and Co-Expressed Genes between Tumor and Adjacent Tissue in Prostate Cancer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luis%20Enrique%20Bautista-Hinojosa">Luis Enrique Bautista-Hinojosa</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20A.%20Herrera"> Luis A. Herrera</a>, <a href="https://publications.waset.org/abstracts/search?q=Cristian%20Arriaga-Canon"> Cristian Arriaga-Canon</a> </p> <p class="card-text"><strong>Abstract:</strong></p>
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=transcriptomics" title="transcriptomics">transcriptomics</a>, <a href="https://publications.waset.org/abstracts/search?q=co-expression" title=" co-expression"> co-expression</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer" title=" cancer"> cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=biomarkers" title=" biomarkers"> biomarkers</a> </p> <a href="https://publications.waset.org/abstracts/179230/identification-of-novel-differentially-expressed-and-co-expressed-genes-between-tumor-and-adjacent-tissue-in-prostate-cancer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4266</span> Cardiokey: A Binary and Multi-Class Machine Learning Approach to Identify Individuals Using Electrocardiographic Signals on Wearable Devices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Chami">S. Chami</a>, <a href="https://publications.waset.org/abstracts/search?q=J.%20Chauvin"> J. Chauvin</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Demarest"> T. Demarest</a>, <a href="https://publications.waset.org/abstracts/search?q=Stan%20Ng"> Stan Ng</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Straus"> M. Straus</a>, <a href="https://publications.waset.org/abstracts/search?q=W.%20Jahner"> W. 
Jahner</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric tools such as fingerprints and irises are widely used in industry to protect critical assets. However, their vulnerability and lack of robustness raise several worries about the protection of highly critical assets. Biometrics based on electrocardiographic (ECG) signals is a robust identification tool. However, most state-of-the-art techniques have worked on clinical signals, which are of higher quality and less noisy than signals extracted from wearable devices such as a smartwatch. In this paper, we present a complete machine learning pipeline that identifies people using ECG signals extracted from an off-person device, i.e., a wearable device that is not used in a medical context, such as a smartwatch. In addition, one of the main challenges of ECG biometrics is the variability of the ECG across different persons and different situations. To address this issue, we propose two different approaches: a per-person classifier and a one-for-all classifier. The first approach builds a binary classifier that distinguishes one person from all others. The second approach builds a multi-class classifier that distinguishes a selected set of individuals from non-selected individuals (others). In preliminary results, the binary classifier obtained 90% accuracy on balanced data, and the multi-class approach reported a log loss of 0.05.
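The two approaches can be sketched in a few lines, assuming each ECG recording has already been reduced to a fixed-length feature vector. The template matching below is illustrative only; the paper's actual classifiers are learned models, and all names and vectors here are made up:

```python
def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def binary_classifier(template, threshold):
    """Per-person approach: accept a sample iff it lies close to that
    one person's enrolled template."""
    def predict(sample):
        return euclidean(sample, template) <= threshold
    return predict

def one_for_all_classifier(templates):
    """One-for-all approach: assign a sample to the closest enrolled
    identity among the selected set (a multi-class decision)."""
    def predict(sample):
        return min(templates, key=lambda pid: euclidean(sample, templates[pid]))
    return predict
```

The per-person variant scales by adding one classifier per enrolled user, while the one-for-all variant makes a single multi-class decision over all enrolled users at once, which is what the log-loss score above measures.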
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=electrocardiographic" title=" electrocardiographic"> electrocardiographic</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=signals%20processing" title=" signals processing"> signals processing</a> </p> <a href="https://publications.waset.org/abstracts/114879/cardiokey-a-binary-and-multi-class-machine-learning-approach-to-identify-individuals-using-electrocardiographic-signals-on-wearable-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4265</span> Testing a Moderated Mediation Model of Person–Organization Fit, Organizational Support, and Feelings of Violation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chi-Tai%20Shen">Chi-Tai Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to examine whether perceived organizational support moderates the relationship between person–former organization fit and person–organization fit through the mediating effect of feelings of violation. A two-stage data collection method was used. Based on our research requirements, we only approached participants who had involuntarily left their former organizations and were looking for a new job.
Our final usable sample comprised 264 participants from Taiwan. We followed Muller, Judd, and Yzerbyt, and Preacher, Rucker, and Hayes’s suggestions to test our moderated mediation model. This study found that employee perceived organizational support moderated the indirect effect of person–former organization fit on person–organization fit (through feelings of violation). Our study ends with a discussion of the main research findings and their limitations and presents suggestions regarding the direction of future studies and the empirical implications of the results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=person%E2%80%93organization%20fit" title="person–organization fit">person–organization fit</a>, <a href="https://publications.waset.org/abstracts/search?q=feelings%20of%20violation" title=" feelings of violation"> feelings of violation</a>, <a href="https://publications.waset.org/abstracts/search?q=organizational%20support" title=" organizational support"> organizational support</a>, <a href="https://publications.waset.org/abstracts/search?q=moderated%20mediation" title=" moderated mediation"> moderated mediation</a> </p> <a href="https://publications.waset.org/abstracts/64313/testing-a-moderated-mediation-model-of-person-organization-fit-organizational-support-and-feelings-of-violation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64313.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4264</span> Speech Identification Test for Individuals with High-Frequency Sloping Hearing Loss in Telugu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong>
<a href="https://publications.waset.org/abstracts/search?q=S.%20B.%20Rathna%20Kumar">S. B. Rathna Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandya%20K.%20Varudhini"> Sandya K. Varudhini</a>, <a href="https://publications.waset.org/abstracts/search?q=Aparna%20Ravichandran"> Aparna Ravichandran </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Telugu is a south central Dravidian language spoken in Andhra Pradesh, a southern state of India. The available speech identification tests in Telugu have been developed to determine the communication problems of individuals having a flat frequency hearing loss. These conventional speech audiometric tests would provide redundant information when used on individuals with high-frequency sloping hearing loss because of better hearing sensitivity in the low- and mid-frequency regions. Hence, conventional speech identification tests do not indicate the true nature of the communication problem of individuals with high-frequency sloping hearing loss. It is highly possible that a person with a high-frequency sloping hearing loss may get maximum scores if conventional speech identification tests are used. Hence, there is a need to develop speech identification test materials that are specifically designed to assess the speech identification performance of individuals with high-frequency sloping hearing loss. The present study aimed to develop speech identification test for individuals with high-frequency sloping hearing loss in Telugu. Individuals with high-frequency sloping hearing loss have difficulty in perception of voiceless consonants whose spectral energy is above 1000 Hz. Hence, the word lists constructed with phonemes having mid- and high-frequency spectral energy will estimate speech identification performance better for such individuals. 
The phonemes /k/, /g/, /c/, /ṭ/, /t/, /p/, /s/, /ś/, /ṣ/ and /h/ are preferred for the construction of words, as these phonemes have spectral energy distributed predominantly in the frequencies above 1000 Hz. The present study developed two word lists in Telugu (each word list contained 25 words) for evaluating the speech identification performance of individuals with high-frequency sloping hearing loss. The performance of individuals with high-frequency sloping hearing loss was evaluated using both conventional and high-frequency word lists under recorded voice condition. The results revealed that the developed word lists were more sensitive in identifying the true nature of the communication problem of individuals with high-frequency sloping hearing loss. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20identification%20test" title="speech identification test">speech identification test</a>, <a href="https://publications.waset.org/abstracts/search?q=high-frequency%20sloping%20hearing%20loss" title=" high-frequency sloping hearing loss"> high-frequency sloping hearing loss</a>, <a href="https://publications.waset.org/abstracts/search?q=recorded%20voice%20condition" title=" recorded voice condition"> recorded voice condition</a>, <a href="https://publications.waset.org/abstracts/search?q=Telugu" title=" Telugu "> Telugu </a> </p> <a href="https://publications.waset.org/abstracts/41243/speech-identification-test-for-individuals-with-high-frequency-sloping-hearing-loss-in-telugu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41243.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">4263</span> Human Resource Management Practices, Person-Environment Fit and Financial Performance in Brazilian Publicly Traded Companies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Henrique%20Rocha%20Fernandes">Bruno Henrique Rocha Fernandes</a>, <a href="https://publications.waset.org/abstracts/search?q=Amir%20Rezaee"> Amir Rezaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Jucelia%20Appio"> Jucelia Appio </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The relation between Human Resource Management (HRM) practices and organizational performance remains the subject of substantial literature. Though many studies have demonstrated a positive relationship, the major influencing variables are still not clear. This study considers Person-Environment Fit (PE Fit) and its components, Person-Supervisor (PS), Person-Group (PG), Person-Organization (PO) and Person-Job (PJ) Fit, as possible explanatory variables. We analyzed PE Fit as a moderator between HRM practices and financial performance in the “best companies to work” in Brazil. Data on HRM practices were classified through the High Performance Working Systems (HPWS) construct, and data on PE Fit were obtained through surveys among employees. Financial data, consisting of return on invested capital (ROIC) and price earnings ratio (PER), were collected for the publicly traded best companies to work. Findings show that PO Fit and PJ Fit play a significant moderating role for PER but not for ROIC.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=financial%20performance" title="financial performance">financial performance</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20resource%20management" title=" human resource management"> human resource management</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance%20working%20systems" title=" high performance working systems"> high performance working systems</a>, <a href="https://publications.waset.org/abstracts/search?q=person-environment%20fit" title=" person-environment fit"> person-environment fit</a> </p> <a href="https://publications.waset.org/abstracts/96403/human-resource-management-practices-person-environment-fit-and-financial-performance-in-brazilian-publicly-traded-companies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96403.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4262</span> To Study the New Invocation of Biometric Authentication Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aparna%20Gulhane">Aparna Gulhane</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics is the science and technology of measuring and analyzing biological data, and it forms the basis of research in biological measuring techniques for the purpose of people identification and recognition.
In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements. Biometric systems are used to authenticate a person's identity; the idea is to use the special characteristics of a person to identify them. This paper presents biometric authentication techniques and their potential deployment through an overall invocation of biometric recognition, with independent testing of various biometric authentication products and technologies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=types%20of%20biometrics" title="types of biometrics">types of biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=importance%20of%20biometric" title=" importance of biometric"> importance of biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=review%20for%20biometrics%20and%20getting%20a%20new%20implementation" title=" review for biometrics and getting a new implementation"> review for biometrics and getting a new implementation</a>, <a href="https://publications.waset.org/abstracts/search?q=biometric%20authentication%20technique" title=" biometric authentication technique"> biometric authentication technique</a> </p> <a href="https://publications.waset.org/abstracts/23939/to-study-the-new-invocation-of-biometric-authentication-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">321</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4261</span> Disability, Stigma and In-Group Identification: An Exploration across
Different Disability Subgroups</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sharmila%20Rathee">Sharmila Rathee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Individuals with disability/ies often face negative attitudes, discrimination, exclusion, and inequality of treatment due to stigmatization and stigmatized treatment. While a significant number of studies in the field of stigma suggest that group identification has positive consequences for stigmatized individuals, ironically very little empirical work has attempted to investigate in-group identification as a coping measure against stigma, humiliation and related experiences among the disability group. In view of this dearth of empirical research on in-group identification among the disability group, the present work attempts to examine the experiences of stigma, humiliation, and in-group identification among the disability group. Results of the study suggest that the use of in-group identification as a coping strategy is not uniform across members of the disability group, and that the degree of in-group identification differs across different sub-groups of the disability group. Further, in-group identification among members of the disability group depends on variables like the degree and impact of disability, factors like the onset, nature, and visibility of the disability, educational experiences, and the resources available to deal with disabling conditions.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disability" title="disability">disability</a>, <a href="https://publications.waset.org/abstracts/search?q=stigma" title=" stigma"> stigma</a>, <a href="https://publications.waset.org/abstracts/search?q=in-group%20identification" title=" in-group identification"> in-group identification</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20identity" title=" social identity"> social identity</a> </p> <a href="https://publications.waset.org/abstracts/48888/disability-stigma-and-in-group-identification-an-exploration-across-different-disability-subgroups" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48888.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">324</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4260</span> Forensic Challenges in Source Device Identification for Digital Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mustapha%20Aminu%20Bagiwa">Mustapha Aminu Bagiwa</a>, <a href="https://publications.waset.org/abstracts/search?q=Ainuddin%20Wahid%20Abdul%20Wahab"> Ainuddin Wahid Abdul Wahab</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Yamani%20Idna%20Idris"> Mohd Yamani Idna Idris</a>, <a href="https://publications.waset.org/abstracts/search?q=Suleman%20Khan"> Suleman Khan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video source device identification has become a problem of concern in numerous domains especially in multimedia security and digital investigation. This is because videos are now used as evidence in legal proceedings. 
Source device identification aims at identifying the source device of digital content using the content it produced. However, due to affordable processing tools and the influx of digital content-generating devices, source device identification is still a major problem within the digital forensic community. In this paper, we discuss source device identification for digital videos by identifying techniques that were proposed in the literature for model or specific device identification. This is aimed at identifying salient open challenges for future research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20forgery" title="video forgery">video forgery</a>, <a href="https://publications.waset.org/abstracts/search?q=source%20camcorder" title=" source camcorder"> source camcorder</a>, <a href="https://publications.waset.org/abstracts/search?q=device%20identification" title=" device identification"> device identification</a>, <a href="https://publications.waset.org/abstracts/search?q=forgery%20detection" title=" forgery detection "> forgery detection </a> </p> <a href="https://publications.waset.org/abstracts/21641/forensic-challenges-in-source-device-identification-for-digital-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21641.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">631</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4259</span> Correlation Matrix for Automatic Identification of Meal-Taking Activity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghazi%20Bouaziz">Ghazi Bouaziz</a>, <a
href="https://publications.waset.org/abstracts/search?q=Abderrahim%20Derouiche"> Abderrahim Derouiche</a>, <a href="https://publications.waset.org/abstracts/search?q=Damien%20Brulin"> Damien Brulin</a>, <a href="https://publications.waset.org/abstracts/search?q=H%C3%A9l%C3%A8ne%20Pigot"> Hélène Pigot</a>, <a href="https://publications.waset.org/abstracts/search?q=Eric%20Campo"> Eric Campo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic classification of Activities of Daily Living (ADL) is a crucial part of ambient assisted living technologies. It allows monitoring of the daily life of the elderly and detection of any changes in their behavior that could be related to a health problem. Detection of ADLs is a challenge, however, especially because each person has his/her own rhythm for performing them. We therefore used a correlation matrix to extract custom rules that enable the detection of ADLs, including eating activity. Data collected from 3 different individuals over periods of 35 to 105 days allow the extraction of personalized eating patterns. Comparison of the eating activity extracted from the correlation matrices with the declarative data collected during the survey shows an accuracy of 90%.
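The correlation-matrix idea above can be sketched simply: sensors that fire together during a given ADL (say, a kitchen presence sensor and a fridge-door contact during meal preparation) show high pairwise correlation, from which per-person rules can be derived. The sensor names and data below are illustrative, not from the study:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_matrix(streams):
    """Pairwise correlations over all sensors.

    streams: dict mapping sensor name -> equal-length activation series.
    Returns a dict keyed by (sensor_a, sensor_b) pairs.
    """
    names = sorted(streams)
    return {(a, b): pearson(streams[a], streams[b])
            for a in names for b in names}
```

A rule-extraction step would then keep sensor pairs whose correlation exceeds a per-person threshold and label the time windows where those pairs co-activate as candidate meal-taking episodes.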
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=elderly%20monitoring" title="elderly monitoring">elderly monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=ADL%20identification" title=" ADL identification"> ADL identification</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20correlation" title=" matrix correlation"> matrix correlation</a>, <a href="https://publications.waset.org/abstracts/search?q=meal-taking%20activity" title=" meal-taking activity"> meal-taking activity</a> </p> <a href="https://publications.waset.org/abstracts/155224/correlation-matrix-for-automatic-identification-of-meal-taking-activity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155224.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4258</span> The Influence of Superordinate Identity and Group Size on Group Decision Making through Discussion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lin%20Peng">Lin Peng</a>, <a href="https://publications.waset.org/abstracts/search?q=Jin%20Zhang"> Jin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Miao"> Yuanyuan Miao</a>, <a href="https://publications.waset.org/abstracts/search?q=Quanquan%20Zheng"> Quanquan Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Group discussion and group decision-making have long been a topic of research interest. 
Traditional research on group decision making typically focuses on the strategies or functional models of combining members’ preferences to reach an optimal consensus. In this research, we explore the natural process of group decision making through discussion and examine two relevant influential factors: a common superordinate identity shared by the group, and group size. We manipulated the social identity of the groups into either a shared superordinate identity or different subgroup identities. We also manipulated group size, making it either big (6-8 persons) or small (3 persons). Using experimental methods, we found that members of a superordinate identity group tend to modify more of their own opinions through the discussion, compared to those identifying only with their subgroups. Members of superordinate identity groups also formed stronger identification with the group decision, that is, the result of the group discussion, than their subgroup peers. We also found higher member modification in bigger groups compared to smaller groups. Evaluations of decisions before and after discussion, as well as of group decisions, are strongly linked to group identity, as members of the superordinate group feel more confident and satisfied with both the results and the decision-making process. Members’ opinions are more similar and homogeneous in smaller groups compared to bigger groups. This research has many implications for further study and applied behavior in organizations.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=group%20decision%20making" title="group decision making">group decision making</a>, <a href="https://publications.waset.org/abstracts/search?q=group%20size" title=" group size"> group size</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=modification" title=" modification"> modification</a>, <a href="https://publications.waset.org/abstracts/search?q=superordinate%20identity" title=" superordinate identity"> superordinate identity</a> </p> <a href="https://publications.waset.org/abstracts/53349/the-influence-of-superordinate-identity-and-group-size-on-group-decision-making-through-discussion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">307</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4257</span> Identification of Dynamic Friction Model for High-Precision Motion Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Martin%20Goubej">Martin Goubej</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomas%20Popule"> Tomas Popule</a>, <a href="https://publications.waset.org/abstracts/search?q=Alois%20Krejci"> Alois Krejci</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with experimental identification of mechanical systems with nonlinear friction characteristics. 
The dynamic LuGre friction model is adopted, and a systematic approach to parameter identification of both the linear and nonlinear subsystems is given. The identification procedure consists of three successive experiments, each dealing with an individual part of the plant dynamics. The proposed method is experimentally verified on an industrial-grade robotic manipulator. Model fidelity is compared with the results achieved with a static friction model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mechanical%20friction" title="mechanical friction">mechanical friction</a>, <a href="https://publications.waset.org/abstracts/search?q=LuGre%20model" title=" LuGre model"> LuGre model</a>, <a href="https://publications.waset.org/abstracts/search?q=friction%20identification" title=" friction identification"> friction identification</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20control" title=" motion control"> motion control</a> </p> <a href="https://publications.waset.org/abstracts/51897/identification-of-dynamic-friction-model-for-high-precision-motion-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51897.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4256</span> Forensic Comparison of Facial Images for Human Identification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20P.%20Gangwar">D. P. Gangwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identification of humans through facial images is of great importance in forensic science. 
Video recordings, CCTV footage, passports, driver licenses, and other related documents are routinely sent to the laboratory, where questioned photographs and video recordings are compared with suspect photographs/recordings to establish the identity of a person. More than 300 questioned and 300 control photographs from actual crime cases, submitted by various investigating agencies, have been compared by the author so far using familiar analysis and comparison techniques such as holistic comparison, morphological analysis, photo-anthropometry, and superimposition. On the basis of the findings obtained during the examination of this large body of photo exhibits, a realistic and comprehensive technique has been proposed that could be very useful for forensic identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CCTV%20Images" title="CCTV Images">CCTV Images</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20features" title=" facial features"> facial features</a>, <a href="https://publications.waset.org/abstracts/search?q=photo-anthropometry" title=" photo-anthropometry"> photo-anthropometry</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposition" title=" superimposition"> superimposition</a> </p> <a href="https://publications.waset.org/abstracts/31353/forensic-comparison-of-facial-images-for-human-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31353.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4255</span> SLIITBOT: Design of a Socially Assistive Robot for SLIIT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> 
<a href="https://publications.waset.org/abstracts/search?q=Chandimal%20Jayawardena">Chandimal Jayawardena</a>, <a href="https://publications.waset.org/abstracts/search?q=Ridmal%20Mendis"> Ridmal Mendis</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoji%20Tennakoon"> Manoji Tennakoon</a>, <a href="https://publications.waset.org/abstracts/search?q=Theekshana%20Wijayathilaka"> Theekshana Wijayathilaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Randima%20Marasinghe"> Randima Marasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes the design and implementation of a socially assistive robot (SLIITBOT), covering the overall process implemented within the robot’s system, its limitations, and a literature survey. The project develops a socially assistive robot called SLIITBOT that interacts with people within the university through voice output and a graphical user interface, providing them with updates and task support. The robot will be able to detect a person entering the room, navigate toward where the person is standing, welcome and greet the person with a simple spoken conversation, introduce its services by voice, and deliver those services through electronic input via an app while guiding the person with voice output. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=application" title="application">application</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=dialogue" title=" dialogue"> dialogue</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a> </p> <a href="https://publications.waset.org/abstracts/132967/sliitbot-design-of-a-socially-assistive-robot-for-sliit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4254</span> Identification of Nonlinear Systems Structured by Hammerstein-Wiener Model </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Brouri">A. Brouri</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Giri"> F. Giri</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Mkhida"> A. Mkhida</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Elkarkri"> A. Elkarkri</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20L.%20Chhibat"> M. L. Chhibat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Standard Hammerstein-Wiener models consist of a linear subsystem sandwiched between two memoryless nonlinearities. Here, the linear subsystem may be parametric or nonparametric, continuous- or discrete-time. The input and output nonlinearities are polynomial and may be noninvertible. 
A two-stage identification method is developed such that the parameters of all nonlinear elements are estimated first, using the Kozen-Landau polynomial decomposition algorithm. The obtained estimates are then used in the identification of the linear subsystem, making use of suitable pre- and post-compensators. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20system%20identification" title="nonlinear system identification">nonlinear system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammerstein-Wiener%20systems" title=" Hammerstein-Wiener systems"> Hammerstein-Wiener systems</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20identification" title=" frequency identification"> frequency identification</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20decomposition" title=" polynomial decomposition"> polynomial decomposition</a> </p> <a href="https://publications.waset.org/abstracts/7969/identification-of-nonlinear-systems-structured-by-hammerstein-wiener-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">511</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4253</span> Application of Artificial Neural Network and Background Subtraction for Determining Body Mass Index (BMI) in Android Devices Using Bluetooth</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neil%20Erick%20Q.%20Madariaga">Neil Erick Q. Madariaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Noel%20B.%20Linsangan"> Noel B. 
Linsangan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Body Mass Index (BMI), which is based on a person’s height and weight, is one way to monitor a person’s health. This study aims to compute the BMI using an Android tablet, obtaining the person’s height with a camera and measuring the weight with a weighing scale or load cell. The height was estimated by applying background subtraction to the captured image, followed by further processing such as vanishing-point estimation and an Artificial Neural Network. The weight was measured using a Wheatstone bridge load cell configuration; after amplification by an AD620 instrumentation amplifier, the value was sent to the computer through a Gizduino microcontroller and Bluetooth. The application processes the images, reads the measured values, and displays the person’s BMI. The study met all of its objectives, and further work is needed to improve the design. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=body%20mass%20index" title="body mass index">body mass index</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title=" artificial neural network"> artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=vanishing%20point" title=" vanishing point"> vanishing point</a>, <a href="https://publications.waset.org/abstracts/search?q=bluetooth" title=" bluetooth"> bluetooth</a>, <a href="https://publications.waset.org/abstracts/search?q=wheatstone%20bridge%20load%20cell" title=" wheatstone bridge load cell"> wheatstone bridge load cell</a> </p> <a href="https://publications.waset.org/abstracts/20342/application-of-artificial-neural-network-and-background-subtraction-for-determining-body-mass-index-bmi-in-android-devices-using-bluetooth" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">324</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4252</span> Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unknown Persons</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saule%20Mussabekova">Saule Mussabekova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A number of chemical elements that are widely present in nature are seldom found in humans, and vice versa. This reflects the body’s selective accumulation and use of elements, largely independent of widely varying environmental parameters. 
Microelemental identification of human hair, particularly from a dead body, is a new step in the development of modern forensic medicine, one that needs reliable criteria for identifying a person. Given the long-standing technogenic pressure on large industrial cities and the region-specific, multi-factor toxic effects of many industrial enterprises, it is important to assess the relevance and role of human hair research in evaluating the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator: it makes it possible to assess the ecological situation and to zone large territories using geological and chemical methods. In addition, monitoring the concentrations of chemical elements across the regions of Kazakhstan provides data that can be used in the forensic medical identification of unidentified bodies. Methods based on determining the chemical composition of hair, with subsequent computer processing, allow the obtained data to be compared with average values for sex and age and reveal causally significant deviations. This makes it possible to preliminarily infer the person’s region of residence, allowing search efforts for missing persons to be concentrated, and supports targeted legal actions for further identification under a more optimal and strictly individual identification scheme. Hair is the most suitable material for forensic research, as it can be stored long-term without time limits or special equipment. Moreover, the quantitative analysis of microelements correlates well with the level of environmental pollution, reflects occupational diseases, and helps with pinpoint accuracy not only to determine a person’s region of temporary residence but also to establish the person’s migration routes. 
The peculiarities of the elemental composition of human hair, independent of age and sex, have been established for persons residing in particular territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of the population of different regions of Kazakhstan have been systematized. For each region, concentration coefficients of the studied elements in hair, relative to the regional averages, have been calculated. Groups of regions with a specific spectrum of elements have been identified; these elements accumulate in hair in quantities exceeding the average indexes. Our results show significant differences in the concentrations of chemical elements among the studied groups and indicate that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominating each region. The research shows that the elemental composition of the hair of people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=analysis%20of%20elemental%20composition%20of%20hair" title="analysis of elemental composition of hair">analysis of elemental composition of hair</a>, <a href="https://publications.waset.org/abstracts/search?q=forensic%20medical%20research%20of%20hair" title=" forensic medical research of hair"> forensic medical research of hair</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20unknown%20dead%20bodies" title=" identification of unknown dead bodies"> identification of unknown dead bodies</a>, <a href="https://publications.waset.org/abstracts/search?q=microelements" title=" microelements"> microelements</a> </p> <a href="https://publications.waset.org/abstracts/79979/additional-opportunities-of-forensic-medical-identification-of-dead-bodies-of-unkown-persons" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79979.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4251</span> Domain Specificity and Language Change: Evidence from South Central (Kuki-Chin) Tibeto-Burman</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Zahid%20Akter">Mohammed Zahid Akter</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In studies of language change, mental factors including analogy, reanalysis, and frequency have received considerable attention as possible catalysts. 
In comparison, relatively little is known regarding which functional domains or construction types are more amenable to these mental factors than others. In this regard, this paper will show with data from South Central (Kuki-Chin) Tibeto-Burman languages how language change interacts with certain functional domains or construction types. These construction types include transitivity, person marking, and polarity distinctions. Thus, it will be shown that transitive clauses are more prone to change than intransitive and ditransitive clauses, clauses with 1st person argument marking are more prone to change than clauses with 2nd and 3rd person argument marking, non-copular clauses are more prone to change than copular clauses, affirmative clauses are more prone to change than negative clauses, and standard negatives are more prone to change than negative imperatives. The following schematic structure can summarize these findings: transitive>intransitive, ditransitive; 1st person>2nd person, 3rd person; non-copular>copular; and affirmative>negative; and standard negative>negative imperatives. In the interest of space, here only one of these findings is illustrated: affirmative>negative. In Hyow (South Central, Bangladesh), the innovative and preverbal 1st person subject k(V)- occurs in an affirmative construction, and the archaic and postverbal 1st person subject -ŋ occurs in a negative construction. Similarly, in Purum (South Central, Northeast India), the innovative and preverbal 1st person subject k(V)- occurs in an affirmative construction, and the archaic and postverbal 1st person subject *-ŋ occurs in a negative construction. Like 1st person subject, we also see that in Anal (South Central, Northeast India), the innovative and preverbal 2nd person subject V- occurs in an affirmative construction, and the archaic and postverbal 2nd person subject -t(V) in a negative construction. 
To conclude, data from South Central Tibeto-Burman languages suggest that language change interacts with functional domains as some construction types are more susceptible to change than others. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=functional%20domains" title="functional domains">functional domains</a>, <a href="https://publications.waset.org/abstracts/search?q=Kuki-Chin" title=" Kuki-Chin"> Kuki-Chin</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20change" title=" language change"> language change</a>, <a href="https://publications.waset.org/abstracts/search?q=south-central" title=" south-central"> south-central</a>, <a href="https://publications.waset.org/abstracts/search?q=Tibeto-Burman" title=" Tibeto-Burman"> Tibeto-Burman</a> </p> <a href="https://publications.waset.org/abstracts/172689/domain-specificity-and-language-change-evidence-south-central-kuki-chin-tibeto-burman" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172689.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4250</span> A Cross-Dialect Statistical Analysis of Final Declarative Intonation in Tuvinian</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Beziakina">D. Beziakina</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Bulgakova"> E. 
Bulgakova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study continues the research on Tuvinian intonation and presents a general cross-dialect analysis of intonation of Tuvinian declarative utterances, specifically the character of the tone movement in order to test the hypothesis about the prevalence of level tone in some Tuvinian dialects. The results of the analysis of basic pitch characteristics of Tuvinian speech (in general and in comparison with two other Turkic languages - Uzbek and Azerbaijani) are also given in this paper. The goal of our work was to obtain the ranges of pitch parameter values typical for Tuvinian speech. Such language-specific values can be used in speaker identification systems in order to get more accurate results of ethnic speech analysis. We also present the results of a cross-dialect analysis of declarative intonation in the poorly studied Tuvinian language. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/12497/a-cross-dialect-statistical-analysis-of-final-declarative-intonation-in-tuvinian" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4249</span> Structural Damage Detection Using Sensors Optimally Located</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Alberto%20Riveros">Carlos Alberto Riveros</a>, <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Fabi%C3%A1n%20Garc%C3%ADa"> Edwin Fabián García</a>, <a href="https://publications.waset.org/abstracts/search?q=Javier%20Enrique%20Rivero"> Javier Enrique Rivero</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The measured data obtained from sensors in the continuous monitoring of civil structures are mainly used for modal identification and damage detection. Therefore, when modal identification analysis is carried out, the quality of the identified modes strongly influences the damage detection results. It is also widely recognized that the usefulness of the measured data used for modal identification and damage detection is significantly influenced by the number and locations of the sensors. The objective of this study is the numerical implementation of two widely known optimum sensor placement methods in beam-like structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optimum%20sensor%20placement" title="optimum sensor placement">optimum sensor placement</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20damage%20detection" title=" structural damage detection"> structural damage detection</a>, <a href="https://publications.waset.org/abstracts/search?q=modal%20identification" title=" modal identification"> modal identification</a>, <a href="https://publications.waset.org/abstracts/search?q=beam-like%20structures." title=" beam-like structures. "> beam-like structures. 
</a> </p> <a href="https://publications.waset.org/abstracts/15240/structural-damage-detection-using-sensors-optimally-located" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15240.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=142">142</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=143">143</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=identification%20of%20person&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script 
src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>