<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: human person</title> <meta name="description" content="Search results for: human person"> <meta name="keywords" content="human person"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="human person" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="human person"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 9455</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: human person</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9455</span> Human Resource Management Practices, Person-Environment Fit and Financial Performance in Brazilian Publicly Traded Companies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Henrique%20Rocha%20Fernandes">Bruno Henrique Rocha Fernandes</a>, <a href="https://publications.waset.org/abstracts/search?q=Amir%20Rezaee"> Amir Rezaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Jucelia%20Appio"> Jucelia Appio </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The relationship between Human Resource Management (HRM) practices and organizational performance remains the subject of a substantial literature. Although many studies have demonstrated a positive relationship, the major influencing variables are still not clear. 
This study considers Person-Environment Fit (PE Fit) and its components, Person-Supervisor (PS), Person-Group (PG), Person-Organization (PO) and Person-Job (PJ) Fit, as possible explanatory variables. We analyzed PE Fit as a moderator between HRM practices and financial performance in the &ldquo;best companies to work for&rdquo; in Brazil. Data on HRM practices were classified through the High Performance Working Systems (HPWS) construct, and data on PE Fit were obtained through surveys among employees. Financial data, consisting of return on invested capital (ROIC) and the price-earnings ratio (PER), were collected for the publicly traded best companies to work for. Findings show that PO Fit and PJ Fit play a significant moderating role for PER but not for ROIC. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=financial%20performance" title="financial performance">financial performance</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20resource%20management" title=" human resource management"> human resource management</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance%20working%20systems" title=" high performance working systems"> high performance working systems</a>, <a href="https://publications.waset.org/abstracts/search?q=person-environment%20fit" title=" person-environment fit"> person-environment fit</a> </p> <a href="https://publications.waset.org/abstracts/96403/human-resource-management-practices-person-environment-fit-and-financial-performance-in-brazilian-publicly-traded-companies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96403.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">9454</span> The Framework of System Safety for Multi Human-in-The-Loop System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hideyuki%20Shintani">Hideyuki Shintani</a>, <a href="https://publications.waset.org/abstracts/search?q=Ichiro%20Koshijima"> Ichiro Koshijima</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In a Cyber Physical System (CPS), if a large number of persons are involved in the process, the role of each person in the CPS might differ from that in a one-person system. It is also necessary to consider how Human-in-The-Loop Cyber Physical Systems (HiTLCPS) ensure the safety of each person in the loop process. In this paper, the authors discuss a system safety framework, with an illustrative example based on the STAMP model, to clarify which points should be considered for safety and what role the person in the loop should have. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cyber-physical-system" title="cyber-physical-system">cyber-physical-system</a>, <a href="https://publications.waset.org/abstracts/search?q=human-in-the-loop" title=" human-in-the-loop"> human-in-the-loop</a>, <a href="https://publications.waset.org/abstracts/search?q=safety" title=" safety"> safety</a>, <a href="https://publications.waset.org/abstracts/search?q=STAMP%20model" title=" STAMP model"> STAMP model</a> </p> <a href="https://publications.waset.org/abstracts/54442/the-framework-of-system-safety-for-multi-human-in-the-loop-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54442.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">9453</span> Career Anchors and Job Satisfaction of Managers: The Mediating Role of Person-job Fit</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azadeh%20Askari">Azadeh Askari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Nasery%20Mohamad%20Abadi"> Ali Nasery Mohamad Abadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study was conducted to investigate the relationship between career anchors and job satisfaction, with emphasis on the mediating role of person-job fit. A cluster sample of 502 managers and supervisors, drawn from ten operational areas of a large energy company, was selected. The instruments used in this study were the Career Anchor Questionnaire, the Job Satisfaction Questionnaire and the Person-Job Fit Questionnaire. The Pearson correlation coefficient was used to analyze the data, and AMOS software was used to determine the effect of the career anchor variables and person-job fit on job satisfaction. The anchors of service and dedication, pure challenge, and security and stability increase person-job fit among managers, and person-job fit mediates the effect of these anchors on job satisfaction. In contrast, the anchors of independence and autonomy reduce person-job fit. Given the importance of positive organizational attitudes, and in order to achieve an optimal fit between job and worker, human resource processes such as hiring and placement should take a person's career anchors into account, so that employees can experience greater job satisfaction and thus bring higher productivity to themselves and the organization. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=career%20anchor" title="career anchor">career anchor</a>, <a href="https://publications.waset.org/abstracts/search?q=job%20satisfaction" title=" job satisfaction"> job satisfaction</a>, <a href="https://publications.waset.org/abstracts/search?q=person-job%20fit" title=" person-job fit"> person-job fit</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20company" title=" energy company"> energy company</a>, <a href="https://publications.waset.org/abstracts/search?q=managers" title=" managers"> managers</a> </p> <a href="https://publications.waset.org/abstracts/145999/career-anchors-and-job-satisfaction-of-managers-the-mediating-role-of-person-job-fit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145999.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9452</span> Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dojoon%20Jung">Dojoon Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiwoong%20Moon"> Kiwoong Moon</a>, <a href="https://publications.waset.org/abstracts/search?q=Joong%20Lee"> Joong Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The height of criminals is one of the important clues that can determine the scope of the suspect's search or exclude the suspect from the search target. 
Although measuring the height of criminals from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height of the criminal can be measured. However, it is still difficult to measure a person's height from CCTV footage with this non-contact measurement method because of variables such as the position, posture, and head shape of criminals. In this paper, we propose a method of matching the CCTV footage with the 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured by using the 3D human model in various scenes of the person in the CCTV footage, and the measurement value of the target person is corrected by the measurement error obtained from replayed CCTV footage of a reference person. We tested the method on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurement values using 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and this method is effective for measuring a person's height in CCTV footage. 
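The correction step described above can be sketched with invented numbers (the paper's actual measurements are not given here): estimate the systematic error of a camera setup from a reference person of known height, then apply that error to the target's frame-wise measurements.

```python
# Invented numbers illustrating the reference-person correction.
# A reference person of known height is replayed through the same CCTV
# pipeline to estimate the setup's systematic error (bias).
ref_true_height = 175.0                        # cm, measured directly
ref_estimates = [174.2, 174.8, 174.4, 174.6]   # cm, from 3D-model matching per frame
bias = ref_true_height - sum(ref_estimates) / len(ref_estimates)

# The bias corrects the averaged frame-wise estimates of the unknown person.
target_estimates = [168.1, 168.5, 168.3]       # cm, frames of the target person
target_height = sum(target_estimates) / len(target_estimates) + bias
```

With these hypothetical values the setup underestimates by 0.5 cm, so the target's averaged estimate of 168.3 cm is corrected to 168.8 cm.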
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20height" title="human height">human height</a>, <a href="https://publications.waset.org/abstracts/search?q=CCTV%20footage" title=" CCTV footage"> CCTV footage</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%2F3D%20matching" title=" 2D/3D matching"> 2D/3D matching</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20human%20body%20model" title=" 3D human body model"> 3D human body model</a> </p> <a href="https://publications.waset.org/abstracts/93625/measuring-the-height-of-a-person-in-closed-circuit-television-video-footage-using-3d-human-body-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9451</span> Person-Environment Fit (PE Fit): Evidence from Brazil</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jucelia%20Appio">Jucelia Appio</a>, <a href="https://publications.waset.org/abstracts/search?q=Danielle%20Deimling%20De%20Carli"> Danielle Deimling De Carli</a>, <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Henrique%20Rocha%20Fernandes"> Bruno Henrique Rocha Fernandes</a>, <a href="https://publications.waset.org/abstracts/search?q=Nelson%20Natalino%20Frizon"> Nelson Natalino Frizon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to investigate if there are positive and significant correlations between the dimensions of Person-Environment Fit (Person-Job, Person-Organization, Person-Group and 
Person-Supervisor) at the &ldquo;Best Companies to Work for&rdquo; in Brazil in 2017. For that, a quantitative approach with a descriptive method was used, the research sample being defined as the &quot;150 Best Companies to Work for&quot;, according to the database collected in 2017 and provided by Funda&ccedil;&atilde;o Instituto de Administra&ccedil;&atilde;o (FIA) of the University of S&atilde;o Paulo (USP). Regarding the data analysis procedures, asymmetry and kurtosis, factorial analysis, Kaiser-Meyer-Olkin (KMO) tests, Bartlett sphericity and Cronbach&#39;s alpha were applied to the 69 research variables, and Pearson&#39;s correlation analysis was performed as the statistical technique for testing the hypothesis. As the main result, we highlight that there was a positive and significant correlation between the dimensions of Person-Environment Fit, corroborating the H1 hypothesis that there is a positive and significant correlation between Person-Job Fit, Person-Organization Fit, Person-Group Fit and Person-Supervisor Fit. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Human%20Resource%20Management%20%28HRM%29" title="Human Resource Management (HRM)">Human Resource Management (HRM)</a>, <a href="https://publications.waset.org/abstracts/search?q=Person-Environment%20Fit%20%28PE%29" title=" Person-Environment Fit (PE)"> Person-Environment Fit (PE)</a>, <a href="https://publications.waset.org/abstracts/search?q=strategic%20people%20management" title=" strategic people management"> strategic people management</a>, <a href="https://publications.waset.org/abstracts/search?q=best%20companies%20to%20work%20for" title=" best companies to work for"> best companies to work for</a> </p> <a href="https://publications.waset.org/abstracts/101954/person-environment-fit-pe-fit-evidence-from-brazil" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101954.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9450</span> Questioning Eugenics and the Dignity of the Human Person in the Age of Science Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ephraim%20Ibekwe">Ephraim Ibekwe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The field of biomedical science has offered modern man more options to choose from than ever before about what their future children will be or look like. Today, embryo selection techniques, for instance, has availed most people the power to choose the sex of their child, to avoid the birth of a child with a disability, or even to choose deliberately to create a disabled child. 
With new biotechnological tools emerging daily, many people deem parents personally and socially responsible for the results of their choice to bear children, i.e., all tests should be done, and parents are responsible for only “keeping” healthy children. Some fear parents may soon be left to their own devices if they have children who require extra time and social spending. As with other discoveries in the area of genetic engineering, such possibilities raise important ethical issues, questions about which of these choices are morally permissible or morally wrong. Hence, this article seeks to understand the extent to which the questions that eugenics poses about the human person can be answered with clarity. With an analytical posture, this article, while not deriding the impact of biotechnology and the medical sciences, argues for human dignity in its strictest consideration. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dignity" title="dignity">dignity</a>, <a href="https://publications.waset.org/abstracts/search?q=eugenics" title=" eugenics"> eugenics</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20person" title=" human person"> human person</a>, <a href="https://publications.waset.org/abstracts/search?q=technology%20and%20biomedical%20science" title=" technology and biomedical science"> technology and biomedical science</a> </p> <a href="https://publications.waset.org/abstracts/147268/questioning-eugenics-and-the-dignity-of-the-human-person-in-the-age-of-science-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147268.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">9449</span> An Investigation Into an Essential Property of Creativity, Which Is the First-Person Experience</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ukpaka%20Paschal">Ukpaka Paschal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Margaret Boden argues that a creative product is one that is new, surprising, and valuable as a result of the combination, exploration, or transformation involved in producing it. Boden uses examples of artificial intelligence systems that meet all of these criteria and argues that real creativity involves autonomy, intentionality, valuation, emotion, and consciousness. This paper analyzes all of these elements in order to understand whether they are sufficient to account for creativity, especially human creativity. The paper focuses on Generative Adversarial Networks (GANs), a class of artificial intelligence algorithms that are said to have disproved the common perception that creativity is something only humans possess. This paper then argues that Boden’s listed properties of creativity, which capture the creativity exhibited by GANs, are not sufficient to account for human creativity, and it identifies “first-person phenomenological experience” as an essential property of human creativity. The rationale behind the proposed essential property is that if creativity involves comprehending our experience of the world around us into a form of self-expression, then our experience of the world really matters with regard to creativity. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=creativity" title=" creativity"> creativity</a>, <a href="https://publications.waset.org/abstracts/search?q=GANs" title=" GANs"> GANs</a>, <a href="https://publications.waset.org/abstracts/search?q=first-person%20experience" title=" first-person experience"> first-person experience</a> </p> <a href="https://publications.waset.org/abstracts/150960/an-investigation-into-an-essential-property-of-creativity-which-is-the-first-person-experience" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150960.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9448</span> SLIITBOT: Design of a Socially Assistive Robot for SLIIT</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chandimal%20Jayawardena">Chandimal Jayawardena</a>, <a href="https://publications.waset.org/abstracts/search?q=Ridmal%20Mendis"> Ridmal Mendis</a>, <a href="https://publications.waset.org/abstracts/search?q=Manoji%20Tennakoon"> Manoji Tennakoon</a>, <a href="https://publications.waset.org/abstracts/search?q=Theekshana%20Wijayathilaka"> Theekshana Wijayathilaka</a>, <a href="https://publications.waset.org/abstracts/search?q=Randima%20Marasinghe"> Randima Marasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research paper defines the research area of the implementation of the socially assistive robot (SLIITBOT). 
It describes the overall process implemented within the robot’s system, its limitations, and a literature survey. This project develops a socially assistive robot called SLIITBOT that interacts with people within the university through voice output and a graphical user interface, and benefits them with updates and tasks. The robot will be able to detect a person when he/she enters the room, navigate to where the person is standing, welcome and greet the person with a simple conversation using its voice, introduce its services by voice, and provide those services through electronic input via an app while guiding the person with voice output. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=application" title="application">application</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=dialogue" title=" dialogue"> dialogue</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a> </p> <a href="https://publications.waset.org/abstracts/132967/sliitbot-design-of-a-socially-assistive-robot-for-sliit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132967.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9447</span> Gait Biometric for Person Re-Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lavanya%20Srinivasan">Lavanya Srinivasan</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Biometric identification identifies unique features of a person, such as fingerprints, iris, ear, and voice, and typically requires the subject's permission and physical contact. Gait biometrics identifies the unique gait of a person by extracting moving features. The main advantage of gait biometrics is that it can identify a person at a distance, without any physical contact. In this work, gait biometrics is used for person re-identification. A person walking naturally is compared with the same person walking with a bag, coat, and case, recorded using longwave infrared, shortwave infrared, mediumwave infrared, and visible cameras. The videos are recorded in rural and urban environments. The pre-processing pipeline includes person detection using YOLO, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The moving features are extracted from the Gait Entropy Image. The extracted features are dimensionality-reduced by principal component analysis and recognised using different classifiers. The comparative results with the different classifiers show that linear discriminant analysis outperforms the other classifiers, with 95.8% for visible in the rural dataset and 94.8% for longwave infrared in the urban dataset. 
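The Gait Entropy Image step described above can be sketched as follows; the tiny binary silhouettes are invented for illustration, and real ones would come from background subtraction on video frames:

```python
import math

# Toy 2x2 binary silhouettes (1 = person pixel), three aligned frames.
silhouettes = [
    [[1, 0], [1, 1]],
    [[1, 1], [0, 1]],
    [[1, 0], [1, 1]],
]
rows, cols = len(silhouettes[0]), len(silhouettes[0][0])

# Average the silhouettes: each pixel becomes its probability of being foreground.
gei = [[sum(s[r][c] for s in silhouettes) / len(silhouettes) for c in range(cols)]
       for r in range(rows)]

def binary_entropy(p):
    """Shannon entropy (bits) of a pixel's foreground probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Gait Entropy Image: high values mark dynamic regions (swinging limbs),
# low values mark static regions (torso, background).
gen_image = [[binary_entropy(p) for p in row] for row in gei]
```

Pixels that are foreground in every frame get zero entropy, while pixels that flicker between frames score highest, which is why the representation emphasises motion.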
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric" title="biometric">biometric</a>, <a href="https://publications.waset.org/abstracts/search?q=gait" title=" gait"> gait</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouettes" title=" silhouettes"> silhouettes</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO" title=" YOLO"> YOLO</a> </p> <a href="https://publications.waset.org/abstracts/136879/gait-biometric-for-person-re-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">172</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9446</span> Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gheida%20J.%20Shahrour">Gheida J. Shahrour</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20J.%20Russell"> Martin J. Russell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects, arranged them into groups according to their culture. We arranged each group into pairs and each pair communicated with each other about different topics. 
A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition. We borrowed modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy from the person, culture, and topic recognition systems respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition respectively. Although direct comparison among these three recognition systems is difficult, it seems that our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e. subjects’ personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and the Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy from the culture and topic recognition systems respectively. 
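As a minimal stand-in for the GMM classifiers described above (a single Gaussian component per class, one-dimensional features, and invented data rather than the paper's body-movement corpus), per-class likelihood scoring looks like this:

```python
import math

# Invented 1-D "movement" features per enrolled subject.
train = {"subject_a": [1.0, 1.2, 0.9], "subject_b": [3.0, 3.1, 2.8]}

def fit_gaussian(xs):
    """Maximum-likelihood mean and variance of a 1-D sample."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, max(var, 1e-6)  # floor the variance to keep log-likelihood finite

models = {name: fit_gaussian(xs) for name, xs in train.items()}

def log_likelihood(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x):
    """Pick the enrolled subject whose Gaussian scores the test feature highest."""
    return max(models, key=lambda name: log_likelihood(x, *models[name]))
```

A full GMM would mix several such components per class and use vector-valued features, but the decision rule, maximum log-likelihood over enrolled models, is the same.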
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=person%20recognition" title="person recognition">person recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20recognition" title=" topic recognition"> topic recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=culture%20recognition" title=" culture recognition"> culture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20body%20movement%20signals" title=" 3D body movement signals"> 3D body movement signals</a>, <a href="https://publications.waset.org/abstracts/search?q=variability%20compensation" title=" variability compensation"> variability compensation</a> </p> <a href="https://publications.waset.org/abstracts/19473/recognizing-an-individual-their-topic-of-conversation-and-cultural-background-from-3d-body-movement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19473.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">541</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9445</span> Investigation of Emotional Indicators of Schizophrenia Patients on Draw a Person Test in Pakistan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shakir%20Iqbal">Shakir Iqbal</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Aqeel"> Muhammad Aqeel</a>, <a href="https://publications.waset.org/abstracts/search?q=Asghar%20Ali%20Shah"> Asghar Ali Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Aftab%20Hussain"> Aftab Hussain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study was aimed to 
investigate and compare the emotional indicators of patients with schizophrenia on the Draw a Person (DAP) test in Pakistan. The test was administered to a sample of 400 participants (200 patients with schizophrenia, 200 controls) aged 20 to 50 years. The data were collected from two provinces of Pakistan (Punjab and Khyber Pakhtunkhwa), and the sample was stratified by age level. Following the Koppitz scoring method, a list of 40 emotional indicators (EIs) derived from the literature review was selected. It was found that 26 of the 40 EIs on the DAP test significantly differentiated patients with schizophrenia from the normal (healthy) population. Chi-square analysis indicated that 23 EIs were significant at the p = .001 level, while three EIs were significant at the p = .05 level. Nine exclusive and four frequent EIs on human figure drawings may serve as significant diagnostic emotional indicators for schizophrenia. The findings suggest that the DAP test can be used as a diagnostic tool for schizophrenia in Pakistan alongside a battery of psychological tests such as the MCMI-III, MMPI, MSE, and HTP. 
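The group comparison described above is a standard chi-square test of association. As an illustrative sketch only (the counts below are invented, not the study's data), a 2×2 Pearson chi-square for a single emotional indicator can be computed as follows:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. rows = patient/control groups and
    columns = indicator present/absent."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: the indicator appears in 120/200 patients
# and 30/200 controls.
chi2 = chi_square_2x2(120, 80, 30, 170)
print(round(chi2, 1))  # 86.4
```

With df = 1, a statistic above 10.83 corresponds to p &lt; .001, the threshold reported for 23 of the indicators.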
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=draw%20a%20person%20test" title="draw a person test">draw a person test</a>, <a href="https://publications.waset.org/abstracts/search?q=normal%20population" title=" normal population"> normal population</a>, <a href="https://publications.waset.org/abstracts/search?q=Schizophrenia%20patients" title=" Schizophrenia patients"> Schizophrenia patients</a>, <a href="https://publications.waset.org/abstracts/search?q=psychological%20sciences" title=" psychological sciences"> psychological sciences</a> </p> <a href="https://publications.waset.org/abstracts/5746/investigation-of-emotional-indicators-of-schizophrenia-patients-on-draw-a-person-test-in-pakistan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5746.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9444</span> Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Fang">Cheng Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingwei%20Quan"> Lingwei Quan</a>, <a href="https://publications.waset.org/abstracts/search?q=Cunyue%20Lu"> Cunyue Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human pose estimation and tracking are to accurately identify and locate the positions of human joints in the video. It is a computer vision task which is of great significance for human motion recognition, behavior understanding and scene analysis. 
There has been remarkable progress on human pose estimation in recent years. However, more research is needed on human pose tracking, especially online tracking. In this paper, a framework called PoseSRPN is proposed for online single-person pose estimation and tracking. We use a Siamese network with an attached pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling and convolution structure with deconvolution. By augmenting the loss of the fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN relies only on a single bounding-box initialization to produce human joint locations. The experimental results show that, while maintaining good pose estimation accuracy on the COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frames/s, which is superior to other pose tracking frameworks. 
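The one-stage training idea, augmenting the Siamese tracker's classification loss with a pose (heatmap) term, can be sketched roughly as follows; the loss weighting `lam` and the exact terms are assumptions, not the paper's published formulation:

```python
import numpy as np

def multitask_loss(cls_logits, cls_labels, pred_heatmaps, gt_heatmaps, lam=1.0):
    """Sketch of a joint loss: binary cross-entropy over the Siamese
    response map plus an MSE term for the pose branch's joint heatmaps."""
    p = 1.0 / (1.0 + np.exp(-cls_logits))  # sigmoid scores per location
    eps = 1e-9
    ce = -np.mean(cls_labels * np.log(p + eps)
                  + (1 - cls_labels) * np.log(1 - p + eps))
    mse = np.mean((pred_heatmaps - gt_heatmaps) ** 2)
    return ce + lam * mse
```

Because both terms live in one scalar loss, a single backward pass trains the tracking and pose branches together, which is what makes the one-stage training possible.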
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20tracking" title=" pose tracking"> pose tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/112839/online-pose-estimation-and-tracking-approach-with-siamese-region-proposal-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9443</span> Hybrid Velocity Control Approach for Tethered Aerial Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lovesh%20Goyal">Lovesh Goyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Pushkar%20Dave"> Pushkar Dave</a>, <a href="https://publications.waset.org/abstracts/search?q=Prajyot%20Jadhav"> Prajyot Jadhav</a>, <a href="https://publications.waset.org/abstracts/search?q=GonnaYaswanth"> GonnaYaswanth</a>, <a href="https://publications.waset.org/abstracts/search?q=Sakshi%20Giri"> Sakshi Giri</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahil%20Dharme"> Sahil Dharme</a>, <a href="https://publications.waset.org/abstracts/search?q=Rushika%20Joshi"> Rushika Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Rishabh%20Verma"> 
Rishabh Verma</a>, <a href="https://publications.waset.org/abstracts/search?q=Shital%20Chiddarwar"> Shital Chiddarwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rising need for human-robot interaction, researchers have proposed and tested multiple models with varying degrees of success. A few of these models are deployed on aerial platforms and are commonly known as tethered aerial systems. These aerial vehicles can be powered continuously through a tether cable, which addresses the predicament of the short battery life of quadcopters. Such systems find applications in industrial, medical, agricultural, and service settings. However, a significant challenge in employing them is achieving smooth and secure robot-human interaction while ensuring that the forces from the tether remain within a comfortable range for the human. To tackle this problem, a hybrid control method is implemented that switches between two control techniques: a constant control input and the steady-state solution. The constant control approach is applied when the person is far from the target location and the error can be treated as approximately constant. The controller switches to the steady-state approach when the person comes within a specified range of the goal position. Both strategies take human velocity feedback into account. This hybrid technique improves the outcome by assisting the person in reaching the desired location while reducing unwanted disturbance to the human throughout the process, thereby keeping the interaction between the robot and the subject smooth. 
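As a rough illustration of the switching logic described above (the gains, switch radius, and velocity-feedback term below are invented placeholders, not the authors' tuned parameters):

```python
def hybrid_velocity_command(distance_to_goal, human_velocity,
                            switch_radius=2.0, u_const=0.5, k=0.2, kv=0.1):
    """Two-mode hybrid controller sketch: a constant input far from the
    goal, switching to a steady-state (error-proportional) command near it.
    Both modes are corrected by human velocity feedback."""
    if distance_to_goal > switch_radius:
        # Far away: error treated as approximately constant.
        return u_const - kv * human_velocity
    # Within the switch radius: command decays smoothly to zero at the goal.
    return k * distance_to_goal - kv * human_velocity
```

The design intent is that the command shrinks as the person nears the goal, so the tether force tapers off instead of tugging at the end of the trajectory.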
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title="unmanned aerial vehicle">unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=tethered%20system" title=" tethered system"> tethered system</a>, <a href="https://publications.waset.org/abstracts/search?q=physical%20human-robot%20interaction" title=" physical human-robot interaction"> physical human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20control" title=" hybrid control"> hybrid control</a> </p> <a href="https://publications.waset.org/abstracts/156082/hybrid-velocity-control-approach-for-tethered-aerial-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9442</span> The Onus of Human to Society in Accordance with Constitution and Traditions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qamar%20Raza">Qamar Raza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with the human concern and onus which every person should provide to his/her society. Especially the rules and regulations described in constitution or traditions which we have inherited from ancestors should be followed, and also our lives should be led in accordance with them. The main concern of paper would be personal behavior with others in a good manner especially what he/she should exercise for society’s welfare. 
As human beings are the fundamental unit of society and play a crucial role in reforming it, they should try their best to develop society as well as maintain harmony, peace, fellow-feeling, and mutual contact within it. The paper also considers how modern society and its elements keep a society successful. The regulations of our constitution and traditions are essential for reforming society. In a nutshell, a human has to mingle in society, keep mutual respect, and understand the value of others as well as of himself. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=constitution" title="constitution">constitution</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20beings" title=" human beings"> human beings</a>, <a href="https://publications.waset.org/abstracts/search?q=society" title=" society"> society</a>, <a href="https://publications.waset.org/abstracts/search?q=traditions" title=" traditions"> traditions</a> </p> <a href="https://publications.waset.org/abstracts/87042/the-onus-of-human-to-society-in-accordance-with-constitution-and-traditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87042.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">223</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9441</span> Information Literacy: Concept and Importance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaurav%20Kumar">Gaurav Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An information literate person is one who uses information effectively in all its forms. 
When presented with questions or problems, an information literate person would know what information to look for, how to search efficiently and be able to access relevant sources. In addition, an information literate person would have the ability to evaluate and select appropriate information sources and to use the information effectively and ethically to answer questions or solve problems. Information literacy has become an important element in higher education. The information literacy movement has internationally recognized standards and learning outcomes. The step-by-step process of achieving information literacy is particularly crucial in an era where knowledge could be disseminated through a variety of media. What is the relationship between information literacy as we define it in higher education and information literacy among non-academic populations? What forces will change how we think about the definition of information literacy in the future and how we will apply the definition in all environments? 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20literacy" title="information literacy">information literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20beings" title=" human beings"> human beings</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20media%20and%20computer%20network%20etc" title=" visual media and computer network etc"> visual media and computer network etc</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20literacy" title=" information literacy"> information literacy</a> </p> <a href="https://publications.waset.org/abstracts/36349/information-literacy-concept-and-importance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9440</span> Person Re-Identification using Siamese Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sello%20Mokwena">Sello Mokwena</a>, <a href="https://publications.waset.org/abstracts/search?q=Monyepao%20Thabang"> Monyepao Thabang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. 
The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network%20topology" title=" convolutional neural network topology"> convolutional neural network topology</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20tracking" title=" person tracking"> person tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=siamese" title=" siamese"> siamese</a> </p> <a href="https://publications.waset.org/abstracts/171989/person-re-identification-using-siamese-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9439</span> Real Time Multi Person Action Recognition Using Pose Estimates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity recognition is an important aspect of video analytics, and many approaches have been proposed to enable it. In this approach, the model identifies the actions of the multiple people in the frame and classifies them accordingly. A few approaches use RNNs and 3D CNNs, which are computationally expensive and cannot be trained with the small datasets currently available. Multi-person action recognition is performed in order to understand the positions and actions of the people present in the video frame. The size of the video frame can be adjusted as a hyper-parameter, depending on the hardware resources available. OpenPose is used to compute pose estimates with a CNN that produces heat-maps, one of which provides skeleton features, which are essentially joint features. The features are then extracted, and a classification algorithm can be applied to classify the action. 
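Assuming 2-D joint coordinates have already been obtained from a pose estimator such as OpenPose, the skeleton features fed to the classifier might be normalized along these lines (the normalization scheme here is an illustrative assumption, not the paper's exact pipeline):

```python
import math

def keypoints_to_features(keypoints):
    """Flatten a list of (x, y) joint coordinates into a feature vector,
    centered and scaled so it is invariant to translation and uniform
    scale; a downstream classifier then maps the vector to an action."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    # Scale by the farthest joint from the centroid (guard against zero).
    scale = max(math.hypot(x - cx, y - cy) for x, y in keypoints) or 1.0
    features = []
    for x, y in keypoints:
        features.extend([(x - cx) / scale, (y - cy) / scale])
    return features
```

Normalizing this way means the same pose produces the same feature vector regardless of where the person stands in the frame or how large they appear.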
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimates" title=" pose estimates"> pose estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127872/real-time-multi-person-action-recognition-using-pose-estimates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127872.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">141</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9438</span> Testing a Moderated Mediation Model of Person–Organization Fit, Organizational Support, and Feelings of Violation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chi-Tai%20Shen">Chi-Tai Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to examine whether perceived organizational support moderates the relationship between person–former organization fit and person–organization fit after the mediating effect of feelings of violation. A two-stage data collection method was used. Given our research requirements, we approached only participants who had involuntarily left their former organizations and were looking for a new job. 
Our final usable sample was comprised of a total of 264 participants from Taiwan. We followed Muller, Judd, and Yzerbyt, and Preacher, Rucker, and Hayes’s suggestions to test our moderated mediation model. This study found that employee perceived organizational support moderated the indirect effect of person–former organization fit on person–organization fit (through feelings of violation). Our study ends with a discussion of the main research findings and their limitations and presents suggestions regarding the direction of future studies and the empirical implications of the results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=person%E2%80%93organization%20fit" title="person–organization fit">person–organization fit</a>, <a href="https://publications.waset.org/abstracts/search?q=feelings%20of%20violation" title=" feelings of violation"> feelings of violation</a>, <a href="https://publications.waset.org/abstracts/search?q=organizational%20support" title=" organizational support"> organizational support</a>, <a href="https://publications.waset.org/abstracts/search?q=moderated%20mediation" title=" moderated mediation"> moderated mediation</a> </p> <a href="https://publications.waset.org/abstracts/64313/testing-a-moderated-mediation-model-of-person-organization-fit-organizational-support-and-feelings-of-violation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64313.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9437</span> Social Contact Patterns among School-Age Children in Taiwan</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Dih%20Ling%20Luh">Dih Ling Luh</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhi%20Shih%20You"> Zhi Shih You</a>, <a href="https://publications.waset.org/abstracts/search?q=Szu%20Chieh%20Chen"> Szu Chieh Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Social contact patterns among school-age children play an important role in the epidemiology of infectious disease. Since many of the greatest threats to human health are spread by direct person-to-person contact, understanding the spread of respiratory pathogens and patterns of human interactions are public health priorities. This study used social contact diaries to compare the number of contacts per day per participant across different flu/non-flu seasons and weekend/weekday. We also present contact properties such as sex, age, masking, setting, frequency, duration, and contact types among school-age children (grades 7–8). The sample size with pair-wise comparisons for the seasons (flu/non-flu) and stratification by location were 54 and 83, respectively. There was no difference in the number of contacts during the flu and non-flu seasons, with averages of 16.3 (S.D. = 12.9) and 14.6 (S.D. = 9.5) people, respectively. Weekdays were associated with 23% and 28% more contacts than weekend days during the non-flu and flu seasons, respectively (p < 0.001) (Wilcoxon signed-rank test). 
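The weekday/weekend comparison above uses the Wilcoxon signed-rank test for paired counts. A minimal hand-rolled version of the statistic (the sum of positive-difference ranks), with invented example data rather than the study's diaries, might look like this:

```python
def wilcoxon_signed_rank(x, y):
    """W+ statistic for paired samples x, y: rank the absolute
    differences (zeros dropped, average ranks for ties) and sum the
    ranks of the positive differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied absolute differences.
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Illustrative paired contact counts (weekday, weekend) for 3 children.
print(wilcoxon_signed_rank([18, 14, 20], [12, 14, 15]))  # 3.0
```

In practice the statistic would then be compared against the Wilcoxon null distribution (or a normal approximation for large samples) to obtain the p-value reported in the abstract.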
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contact%20patterns" title="contact patterns">contact patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=behavior" title=" behavior"> behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=influenza" title=" influenza"> influenza</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20mixing" title=" social mixing"> social mixing</a> </p> <a href="https://publications.waset.org/abstracts/42689/social-contact-patterns-among-school-age-children-in-taiwan" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42689.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">345</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9436</span> Structure of Consciousness According to Deep Systemic Constellations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20Ustinov">Dmitry Ustinov</a>, <a href="https://publications.waset.org/abstracts/search?q=Olga%20Lobareva"> Olga Lobareva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception it was established that the human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (reactive level, energy or ancestral level, spiritual level, magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness to the spiritual level, however deeper levels of consciousness are not found for every person. 
It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of the spiritual level of a person defines the paradigm of all his internal processes and the main motives of the movement through life. At any level of consciousness disturbances can occur. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in the daily life of a person in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance it is impossible to completely correct its manifestation in the actual moment. Thus a destructive pattern of feeling and behavior in the actual moment can exist because of a disturbance, for example, at the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels without removing a source of disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of the problem located at any depth. The methodology has confirmed its effectiveness in working with more than a thousand people. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=constellations" title="constellations">constellations</a>, <a href="https://publications.waset.org/abstracts/search?q=spiritual%20psychology" title=" spiritual psychology"> spiritual psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=structure%20of%20consciousness" title=" structure of consciousness"> structure of consciousness</a>, <a href="https://publications.waset.org/abstracts/search?q=transpersonal%20psychology" title=" transpersonal psychology"> transpersonal psychology</a> </p> <a href="https://publications.waset.org/abstracts/82375/structure-of-consciousness-according-to-deep-systemic-constellations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82375.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9435</span> Absolute Liability in International Human Rights Law</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gassem%20Alfaleh">Gassem Alfaleh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Under strict liability, a person can be held liable for harm resulting from certain actions or activities without any fault. The liability is strict because a person can be liable for harm committed with or without intent. The duty owed is the duty to avoid causing the plaintiff any harm. However, “strict liability is imposed at the International level by two types of treaties, namely those limited to giving internal effect to treaty provisions and those that impose responsibilities on states. 
The basic principle of strict liability is that there is a liability on the operator or the state (when the act concerned is attributable to the state) for damage inflicted without there being a need to prove unlawful behavior”. In international human rights law, strict liability can exist when a defendant is in legal jeopardy by virtue of an internationally wrongful act, without any accompanying intent or mental state. When the defendant engages in an abnormally dangerous activity that harms the environment, he will be held liable for any harm it causes, even if he was not at fault. The paper will focus on these activities under international human rights law. First, the paper will define important terms in its first section. Second, it will focus on state and non-state actors in terms of strict liability. Then, the paper will cover four major areas in which states should be liable for hazardous activities: (1) nuclear energy, (2) maritime pollution, (3) space law, and (4) other hazardous activities that damage the environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title="human rights">human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=law" title=" law"> law</a>, <a href="https://publications.waset.org/abstracts/search?q=legal" title=" legal"> legal</a>, <a href="https://publications.waset.org/abstracts/search?q=absolute" title=" absolute"> absolute</a> </p> <a href="https://publications.waset.org/abstracts/143159/absolute-liability-in-international-human-rights-law" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143159.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9434</span> Determination of Neighbor Node in Consideration of the Imaging Range of Cameras in Automatic Human Tracking System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kozo%20Tanigawa">Kozo Tanigawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Tappei%20Yotsumoto"> Tappei Yotsumoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Kenichi%20Takahashi"> Kenichi Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Takao%20Kawamura"> Takao Kawamura</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Sugahara"> Kazunori Sugahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An automatic human tracking system using mobile agent technology is realized because a mobile agent moves in accordance with a migration of a target person. 
In this paper, we propose a method for determining the neighbor node in consideration of the imaging range of cameras. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20tracking" title="human tracking">human tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agent" title=" mobile agent"> mobile agent</a>, <a href="https://publications.waset.org/abstracts/search?q=Pan%2FTilt%2FZoom" title=" Pan/Tilt/Zoom"> Pan/Tilt/Zoom</a>, <a href="https://publications.waset.org/abstracts/search?q=neighbor%20relation" title=" neighbor relation"> neighbor relation</a> </p> <a href="https://publications.waset.org/abstracts/11821/determination-of-neighbor-node-in-consideration-of-the-imaging-range-of-cameras-in-automatic-human-tracking-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11821.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">516</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9433</span> Rehabilitation of the Blind Using Sono-Visualization Tool</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashwani%20Kumar">Ashwani Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes play a vital role in human beings, yet very little research has been done on rehabilitation for blind people. This paper discusses work that helps blind people recognize the basic shapes of objects, such as circles, squares, triangles, horizontal lines, vertical lines, and diagonal lines, and waveforms such as sinusoidal, square, and triangular. 
This is largely achieved using a digital camera, which captures the visual information present in front of the blind person, and a software program, which performs the image processing operations; finally, the processed image is converted into sound. The generated sound is then fed to the blind person through headphones, allowing him or her to visualize a mental image of the object. This visualization requires training the blind person, and various training methods have been applied for object recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pixel" title=" pixel"> pixel</a>, <a href="https://publications.waset.org/abstracts/search?q=pitch" title=" pitch"> pitch</a>, <a href="https://publications.waset.org/abstracts/search?q=loudness" title=" loudness"> loudness</a>, <a href="https://publications.waset.org/abstracts/search?q=sound%20generation" title=" sound generation"> sound generation</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection" title=" edge detection"> edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=brightness" title=" brightness"> brightness</a> </p> <a href="https://publications.waset.org/abstracts/14606/rehabilitation-of-the-blind-using-sono-visualization-tool" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14606.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9432</span> Images Selection and Best Descriptor
Combination for Multi-Shot Person Re-Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousra%20Hadj%20Hassen">Yousra Hadj Hassen</a>, <a href="https://publications.waset.org/abstracts/search?q=Walid%20Ayedi"> Walid Ayedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tarek%20Ouni"> Tarek Ouni</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Jallouli"> Mohamed Jallouli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To re-identify a person is to check whether he or she has already been seen over a camera network. Recently, re-identifying people over large public camera networks has become a crucial task for ensuring public security. The vision community has deeply investigated this area of research. Most existing research relies only on spatial appearance information from either one or multiple person images. In practice, the real person re-identification framework is a multi-shot scenario; however, efficiently modeling a person’s appearance and choosing the best samples remain challenging problems. In this work, an extensive comparison of state-of-the-art descriptors combined with the proposed frame selection method is studied. Specifically, we evaluate the sample selection approach using multiple proposed descriptors. We show the effectiveness and advantages of the proposed method through extensive comparisons with related state-of-the-art approaches on two standard datasets, PRID2011 and iLIDS-VID.
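For illustration, the multi-shot matching idea described in this abstract — select a few representative frames per person, then compare sets of descriptors — can be sketched in plain Python. The greedy max-min frame selection and minimum set-to-set distance below are hypothetical stand-ins chosen for brevity, not the descriptors or selection criterion the authors evaluate.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_frames(frames, k):
    # Greedy max-min selection: keep up to k mutually dissimilar frames,
    # so redundant near-duplicate frames are skipped.
    chosen = [frames[0]]
    while len(chosen) < min(k, len(frames)):
        remaining = [f for f in frames if f not in chosen]
        best = max(remaining, key=lambda f: min(euclidean(f, c) for c in chosen))
        chosen.append(best)
    return chosen

def set_distance(probe, gallery):
    # Multi-shot matching: minimum pairwise distance between two frame sets.
    return min(euclidean(p, g) for p in probe for g in gallery)

def reidentify(probe_frames, gallery, k=3):
    # Return the gallery identity whose selected frames lie closest to the probe's.
    probe = select_frames(probe_frames, k)
    return min(gallery, key=lambda pid: set_distance(probe, select_frames(gallery[pid], k)))
```

In a real system each frame would be a high-dimensional appearance descriptor rather than a 2-D tuple, but the selection-then-set-matching structure is the same.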
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=descriptor" title=" descriptor"> descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-shot" title=" multi-shot"> multi-shot</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=selection" title=" selection"> selection</a> </p> <a href="https://publications.waset.org/abstracts/65815/images-selection-and-best-descriptor-combination-for-multi-shot-person-re-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65815.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9431</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. 
A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. Such a system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system. The idea behind designing and creating a face recognition system using deep learning with Azure ML and Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given: 98.46% accuracy using Fast-RCNN, with the performance of the algorithms reported under different training conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">9430</span> Offline Signature Verification in Punjabi Based On SURF Features and Critical Point Matching Using HMM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rajpal%20Kaur">Rajpal Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Pooja%20Choudhary"> Pooja Choudhary</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an imposter. Signature recognition systems can be categorized as offline (static) and online (dynamic). This paper presents a SURF-feature-based offline signature recognition system that is trained with low-resolution scanned signature images. The signature of a person is an important biometric attribute of a human being, which can be used to authenticate human identity. The signature can be handled as an image and recognized using computer vision and HMM techniques. With modern computers, there is a need to develop fast algorithms for signature recognition; multiple techniques have been defined, leaving considerable scope for research. In this paper, off-line (static) signature recognition and verification using SURF features with an HMM is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on parameters extracted from the signature using various image processing techniques. The off-line signature verification and recognition is implemented on the MATLAB platform. This work has been analyzed and tested and found suitable for its purpose. The proposed method performs better than the other recently proposed methods.
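As a rough illustration of the HMM-based verification step this abstract describes, the sketch below scores a quantized feature sequence with the forward algorithm and thresholds its average log-likelihood. All model parameters and the three-symbol codebook are made-up toy values, not the trained SURF-based features the paper uses.

```python
import math

# Toy 3-state HMM over quantized stroke features (codebook symbols 0..2).
# States loosely model signature segments (start, middle, end); every
# probability here is an illustrative placeholder, not a trained value.
START = [0.8, 0.2, 0.0]
TRANS = [[0.6, 0.4, 0.0],
         [0.0, 0.7, 0.3],
         [0.0, 0.0, 1.0]]
EMIT = [[0.7, 0.2, 0.1],
        [0.1, 0.8, 0.1],
        [0.2, 0.2, 0.6]]

def log_likelihood(obs):
    # Forward algorithm: log P(observation sequence | genuine-signature HMM).
    alpha = [START[s] * EMIT[s][obs[0]] for s in range(3)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * TRANS[p][s] for p in range(3)) * EMIT[s][o]
                 for s in range(3)]
    return math.log(sum(alpha))

def verify(obs, threshold=-1.2):
    # Accept if the average per-symbol log-likelihood clears the threshold;
    # the threshold would normally be tuned on a validation set.
    return log_likelihood(obs) / len(obs) >= threshold
```

A sequence that follows the modeled start-middle-end progression, such as `[0, 1, 2]`, scores higher than one that does not, which is the basis for rejecting forgeries.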
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=offline%20signature%20verification" title="offline signature verification">offline signature verification</a>, <a href="https://publications.waset.org/abstracts/search?q=offline%20signature%20recognition" title=" offline signature recognition"> offline signature recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signatures" title=" signatures"> signatures</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20features" title=" SURF features"> SURF features</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM" title=" HMM "> HMM </a> </p> <a href="https://publications.waset.org/abstracts/20259/offline-signature-verification-in-punjabi-based-on-surf-features-and-critical-point-matching-using-hmm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">384</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9429</span> A Comparative Approach to the Concept of Incarnation of God in Hinduism and Christianity</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cemil%20Kutluturk">Cemil Kutluturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This is a comparative study of the incarnation of God according to Hinduism and Christianity. After dealing with their basic ideas on the concept of the incarnation of God, the main similarities and differences between each other will be examined by quoting references from their sacred texts. 
In Hinduism, the term avatara is used to indicate the concept of the incarnation of God. The word avatara is derived from ava (down) and tri (to cross, to save, attain). Thus avatara means to come down or to descend. Although an avatara is commonly considered as an appearance of any deity on earth, the term refers particularly to descents of Vishnu. According to Hinduism, God becomes an avatara in every age, entering diverse wombs for the sake of establishing righteousness. On the Christian side, the word incarnation means enfleshment. In Christianity, it is believed that the Logos or Word, the Second Person of the Trinity, assumed human reality. Incarnation refers both to the act of God becoming a human being and to the result of his action, namely the permanent union of the divine and human natures in the one Person of the Word. When the doctrines of incarnation and avatara are compared, some similarities and differences can be found. The basic similarity is that, in both doctrines, the descended God is not bound by the laws of nature as human beings are. They reveal God’s personal love and concern, and emphasize loving devotion. Their entry into the world is generally accompanied by extraordinary signs. In both cases, the descent of God allows for human beings to ascend to God. On the other hand, there are some distinctions between the two religious traditions. For instance, according to Hinduism there are many and repeated avataras, while Christ comes only once. Indeed, this is related to the respective cyclic and linear worldviews of the two religions. Another difference is that in Hinduism avataras are real and perfect, while in Christianity Christ is also real, yet imperfect; that is, he has human imperfections, except sin. While Christ has never been thought of as a partial incarnation, in Hinduism there are some partial and full avataras.
The other difference is that while the purpose of Christ is primarily ultimate salvation, not every avatara grants ultimate liberation; some avataras come only to save a devotee from a specific predicament. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Avatara" title="Avatara">Avatara</a>, <a href="https://publications.waset.org/abstracts/search?q=Christianity" title=" Christianity"> Christianity</a>, <a href="https://publications.waset.org/abstracts/search?q=Hinduism" title=" Hinduism"> Hinduism</a>, <a href="https://publications.waset.org/abstracts/search?q=incarnation" title=" incarnation"> incarnation</a> </p> <a href="https://publications.waset.org/abstracts/68777/a-comparative-approach-to-the-concept-of-incarnation-of-god-in-hinduism-and-christianity" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9428</span> Anthropocentric and Ecocentric Representation of Human-Environment Relationship in Paulo Coelho&#039;s the Alchemist</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tooba%20Sabir">Tooba Sabir</a>, <a href="https://publications.waset.org/abstracts/search?q=Namra%20Sabir"> Namra Sabir</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Amjad%20Sabir"> Mohammad Amjad Sabir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The human-environment relationship has been projected since the beginning of the literary tradition, i.e., the
pastoral tradition; however, the interest of critics, writers, and poets in this view has developed over the last couple of decades because of the increasing scope of environmental studies and growing environmental issues. One such novel that projects the human-environment relationship is ‘The Alchemist,’ one of Paulo Coelho’s most widely read novels. Its central theme is that the universe conspires to help a person achieve his destiny, projecting anthropocentrism and human domination by centralizing the human and devaluing the intrinsic worth of the ecosystem. However, an ecocritical analysis of the text reveals that the novel also contains ecocentrism at several instances, e.g., ‘everything on earth is being continuously transformed because earth is alive.’ This portrays the ecosphere as a living and dynamic entity rather than a mere instrument for humans to achieve their destiny. The idea that the universe shares the same language projects the unity of nature, showing the relationship between the human and non-human aspects of the environment as one being, not separate from or superior to one another. It depicts the human as a part of the environment, not the lord of the world. Therefore, it can be concluded that the novel oscillates between the ecocentric and the anthropocentric phenomena. It is not suggested, however, that one phenomenon should be valued over the other, but that the complexities of both phenomena should be recognized, acknowledged, and valued in order to encourage the interactions between literature and environment.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anthropocentrism" title="anthropocentrism">anthropocentrism</a>, <a href="https://publications.waset.org/abstracts/search?q=ecocentrism" title=" ecocentrism"> ecocentrism</a>, <a href="https://publications.waset.org/abstracts/search?q=ecocritical%20analysis" title=" ecocritical analysis"> ecocritical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=human-environment%20relationship" title=" human-environment relationship"> human-environment relationship</a> </p> <a href="https://publications.waset.org/abstracts/72371/anthropocentric-and-ecocentric-representation-of-human-environment-relationship-in-paulo-coelhos-the-alchemist" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72371.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9427</span> General Network with Four Nodes and Four Activities with Triangular Fuzzy Number as Activity Times</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20Tamhankar">Rashmi Tamhankar</a>, <a href="https://publications.waset.org/abstracts/search?q=Madhav%20Bapat"> Madhav Bapat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In many projects, we have to use human judgment for determining the duration of activities, which may vary from person to person. Hence, there is vagueness about the time duration of activities in network planning. Fuzzy sets can handle such vague or imprecise concepts and have an application to such networks.
The vague activity times can be represented by triangular fuzzy numbers. In this paper, a general network with fuzzy activity times is considered, conditions for the critical path are obtained, and the total float time of each activity is computed. Several numerical examples are discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PERT" title="PERT">PERT</a>, <a href="https://publications.waset.org/abstracts/search?q=CPM" title=" CPM"> CPM</a>, <a href="https://publications.waset.org/abstracts/search?q=triangular%20fuzzy%20numbers" title=" triangular fuzzy numbers"> triangular fuzzy numbers</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20activity%20times" title=" fuzzy activity times "> fuzzy activity times </a> </p> <a href="https://publications.waset.org/abstracts/28350/general-network-with-four-nodes-and-four-activities-with-triangular-fuzzy-number-as-activity-times" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28350.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">473</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9426</span> The Structure of the Intangible Capital</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kolesnikova%20Julia">Kolesnikova Julia</a>, <a href="https://publications.waset.org/abstracts/search?q=Fakhrutdinova%20Elena"> Fakhrutdinova Elena</a>, <a href="https://publications.waset.org/abstracts/search?q=Zagidullina%20Venera"> Zagidullina Venera</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamasheva%20Anastasia"> Kamasheva Anastasia</a> </p> <p class="card-text"><strong>Abstract:</strong></p>
The article deals with the structure of intangible capital. A significant share of intangible capital is associated with a person as such and can be considered human capital, which in turn also has a complex structure, including intellectual, social, organizational, client, and reputational capital. We have allocated unidentifiable capital as a separate category of intangible capital, including a variety of synergistic interaction effects and other effects. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intangible%20capital" title="intangible capital">intangible capital</a>, <a href="https://publications.waset.org/abstracts/search?q=intangible%20property" title=" intangible property"> intangible property</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20of%20intangible%20property" title=" object of intangible property"> object of intangible property</a>, <a href="https://publications.waset.org/abstracts/search?q=reputation%20capital" title=" reputation capital"> reputation capital</a> </p> <a href="https://publications.waset.org/abstracts/25209/the-structure-of-the-intangible-capital" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25209.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li>
<li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=315">315</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=316">316</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20person&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul 
class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
