<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: recognition primed decision</title> <meta name="description" content="Search results for: recognition primed decision"> <meta name="keywords" content="recognition primed decision"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="recognition primed decision" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> 
</div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="recognition primed decision"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 5567</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: recognition primed decision</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5567</span> Time Pressure and Its Effect at Tactical Level of Disaster Management</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Agoston%20Restas">Agoston Restas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: When managing disasters, decision makers may face special situations in which any warning sign of a drastic change is missing, so improvised decision making is required. The complexity, ambiguity, uncertainty, or volatility of the situation often demands improvisation in decision making.
Improvisation can occur at any level of management (strategic, operational, and tactical), but at the tactical level its main driver is time pressure, which is the greatest problem during disaster management. Methods: The author used several tools and methods: a study of the relevant literature, his own experience as a firefighting manager, and two surveys, one an essay analysis and the other a word association test created specifically for this research. Results and discussion: This article shows that, in certain situations, multi-criteria evaluative decision-making processes cannot be used, or can be used only in a limited manner. Nevertheless, managers, directors, and commanders frequently face situations in which decisions must be made within a short time. The functional background and mechanism of such rapid decisions, which differ from conventional decision making, have been studied recently, and this special decision procedure has been named recognition-primed decision. The article illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision-making, elaborates a model of it relevant to managers at the tactical level, and explores and systemizes the factors that facilitate (catalyze) these processes, using the example of fire managers.
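The contrast drawn above between analytical, multi-criteria evaluation and recognition-primed decision-making can be sketched in code. This is a toy illustration only; the cues, patterns, and actions below are invented for the example and are not taken from the study.

```python
# Minimal sketch of recognition-primed decision (RPD) making under time
# pressure. All cues, patterns, and actions are hypothetical.

def rpd_decide(cues, experience):
    """Return the typical action of the first familiar pattern that matches.

    RPD satisfices: instead of comparing all options on multiple criteria,
    the decision maker matches the situation against stored patterns and
    commits to the first workable action found.
    """
    for pattern, typical_action in experience:
        if pattern <= cues:  # all of the pattern's cues are present
            return typical_action
    return "improvise"  # no familiar pattern: improvisation is required

# Hypothetical fireground experience base, ordered by specificity.
experience = [
    ({"dark_smoke", "occupants_inside"}, "search_and_rescue"),
    ({"dark_smoke", "single_storey"}, "interior_attack"),
    ({"dark_smoke"}, "defensive_attack"),
]

print(rpd_decide({"dark_smoke", "single_storey"}, experience))
print(rpd_decide({"flood"}, experience))
```

Under time pressure the loop stops at the first adequate match; a situation with no familiar pattern falls through to improvisation, mirroring the tactical-level case described in the abstract.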
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decision%20making" title="decision making">decision making</a>, <a href="https://publications.waset.org/abstracts/search?q=disaster%20managers" title=" disaster managers"> disaster managers</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision" title=" recognition primed decision"> recognition primed decision</a>, <a href="https://publications.waset.org/abstracts/search?q=model%20for%20making%20decisions%20in%20emergencies" title=" model for making decisions in emergencies"> model for making decisions in emergencies</a> </p> <a href="https://publications.waset.org/abstracts/47916/time-pressure-and-its-effect-at-tactical-level-of-disaster-management" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47916.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5566</span> The Efficacy of Salicylic Acid and Puccinia Triticina Isolates Priming Wheat Plant to Diuraphis Noxia Damage</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huzaifa%20Bilal">Huzaifa Bilal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Russian wheat aphid (Diuraphis noxia, Kurdjumov) is considered an economically important wheat (Triticum aestivum L.) pest worldwide and in South Africa. The RWA damages wheat plants and reduces annual yields by more than 10%. Even though pest management by pesticides and resistance breeding is an attractive option, chemicals can cause harm to the environment. 
Furthermore, the evolution of resistance-breaking aphid biotypes has outpaced the release of resistant cultivars. An alternative strategy, such as priming, which sensitizes plants to respond effectively to subsequent attacks, is needed to reduce the impact of aphid damage on plants. In this study, wheat plants at the seedling and flag leaf stages were primed with salicylic acid and with isolates representative of two races of the leaf rust pathogen Puccinia triticina Eriks. (Pt) before RWA (South African RWA biotypes 1 and 4) infestation. Randomized complete block design experiments were conducted in the greenhouse to study the plant-pest interaction in primed and non-primed plants. Analysis of induced aphid damage indicated that salicylic acid differentially primed wheat cultivars for increased resistance to the RWASA biotypes. At the seedling stage, all cultivars were primed for enhanced resistance to RWASA1, while at the flag leaf stage, only PAN 3111, SST 356 and Makalote were primed for increased resistance. Puccinia triticina effectively primed wheat cultivars for strong resistance to RWASA1 at the seedling and flag leaf stages. However, Pt failed to enhance the four Lesotho cultivars' resistance to RWASA4 at the seedling stage and PAN 3118 at the flag leaf stage. The induced responses at the seedling and flag leaf stages were positively correlated in all the treatments. Primed plants showed high activity of antioxidant enzymes such as peroxidase, ascorbate peroxidase and superoxide dismutase. High antioxidant activity indicates the activation of resistance responses in primed plants (primed by salicylic acid and Puccinia triticina). Isolates of avirulent Pt races can therefore be worthy priming agents for improved resistance to RWA infestation. The priming effects need further evaluation in field trials to investigate their application efficiency. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Russian%20wheat%20aphis" title="Russian wheat aphid">Russian wheat aphid</a>, <a href="https://publications.waset.org/abstracts/search?q=salicylic%20acid" title=" salicylic acid"> salicylic acid</a>, <a href="https://publications.waset.org/abstracts/search?q=puccina%20triticina" title=" Puccinia triticina"> Puccinia triticina</a>, <a href="https://publications.waset.org/abstracts/search?q=priming" title=" priming"> priming</a> </p> <a href="https://publications.waset.org/abstracts/139395/the-efficacy-of-salicylic-acid-and-puccinia-triticina-isolates-priming-wheat-plant-to-diuraphis-noxia-damage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139395.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">208</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5565</span> Influence of Salicylic Acid on Yield and Some Physiological Parameters in Chickpea (Cicer arietinum L.)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farid%20Shekari">Farid Shekari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Salicylic Acid (SA) is a plant hormone that improves some physiological responses of plants under stress conditions. Seeds of two desi-type chickpea cultivars, Kaka and Pirooz, were primed with 250, 500, 750, or 1000 μM SA and evaluated, together with untreated control seeds, under rain-fed conditions. Seed priming led to higher efficiency in both cultivars compared to non-primed treatments. 
In general, seed priming with 500 and 750 μM SA had the most appropriate effects, although the cultivars' responses differed. Kaka performed better than Pirooz in both primed and non-primed seeds. The results revealed that not only yield quantity but also yield quality, such as seed protein content, could be positively affected by SA treatments. By enhancing soluble sugar and proline contents, SA appears to have increased total water potential (ψ) and relative water content (RWC). The increase in RWC in turn led to a rise in chlorophyll content and chlorophyll stability. In general, SA increased the water use efficiency, on both a biological yield and a seed yield basis, and the drought tolerance of chickpea plants. The harvest index (HI) decreased slightly under SA treatments, indicating that SA is more effective for biomass production than for seed yield. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chlorophyll" title="chlorophyll">chlorophyll</a>, <a href="https://publications.waset.org/abstracts/search?q=harvest%20index" title=" harvest index"> harvest index</a>, <a href="https://publications.waset.org/abstracts/search?q=proline" title=" proline"> proline</a>, <a href="https://publications.waset.org/abstracts/search?q=seed%20protein" title=" seed protein"> seed protein</a>, <a href="https://publications.waset.org/abstracts/search?q=soluble%20sugar" title=" soluble sugar"> soluble sugar</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20use%20efficiency" title=" water use efficiency"> water use efficiency</a>, <a href="https://publications.waset.org/abstracts/search?q=yield%20component" title=" yield component"> yield component</a> </p> <a href="https://publications.waset.org/abstracts/3559/influence-of-salicylic-acid-on-yield-and-some-physiological-parameters-in-chickpea-cicer-arietinum-l" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3559.pdf" target="_blank" class="btn btn-primary 
btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5564</span> A Contribution to Human Activities Recognition Using Expert System Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper deals with human activity recognition from sensor data. It is an active research area, and the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; the recognition is performed using the objects, object states, and gestures and taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions and the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. 
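The scheme described in this abstract, recognizing simple activities from objects, object states, and gestures, then composing them into complex activities, can be sketched roughly as a small rule base. All rule and activity names below are hypothetical illustrations, not the authors' actual knowledge base.

```python
# Toy expert-system sketch: simple activities are recognized from
# (object, object_state, gesture) observations; a complex activity is
# recognized from the sequence of simple activities it decomposes into.

SIMPLE_RULES = {
    ("kettle", "on", "grasp"): "boil_water",
    ("cup", "held", "pour"): "pour_drink",
    ("cup", "held", "raise"): "drink",
}

COMPLEX_RULES = {
    ("boil_water", "pour_drink", "drink"): "make_tea",
}

def recognize_simple(observation):
    """Map one (object, object_state, gesture) triple to a simple activity."""
    return SIMPLE_RULES.get(observation)

def recognize_complex(observations):
    """Decompose a sequence of observations and match the complex rule base."""
    simple = tuple(recognize_simple(o) for o in observations)
    return COMPLEX_RULES.get(simple, "unknown")

obs = [("kettle", "on", "grasp"), ("cup", "held", "pour"), ("cup", "held", "raise")]
print(recognize_complex(obs))  # -> make_tea
```

A full system would also condition the rules on context (object and person location, action durations), as the abstract notes; here the rules are pure lookups to keep the decomposition idea visible.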
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/171721/a-contribution-to-human-activities-recognition-using-expert-system-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171721.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5563</span> Human Activities Recognition Based on Expert System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Soraya%20Aloui"> Soraya Aloui</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Semchaoui"> Sara Semchaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of human activities from sensor data is an active research area, and the main objective is to obtain a high recognition rate. In this work, we propose a recognition system based on expert systems. 
The proposed system makes the recognition based on the objects, object states, and gestures, taking into account the context (the location of the objects and of the person performing the activity, the duration of the elementary actions, and the activity). This work focuses on complex activities which are decomposed into simple easy to recognize activities. The proposed method can be applied to any type of activity. The simulation results show the robustness of our system and its speed of decision. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=ubiquitous%20computing" title=" ubiquitous computing"> ubiquitous computing</a>, <a href="https://publications.waset.org/abstracts/search?q=context-awareness" title=" context-awareness"> context-awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=expert%20system" title=" expert system"> expert system</a> </p> <a href="https://publications.waset.org/abstracts/151943/human-activities-recognition-based-on-expert-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5562</span> Face Recognition Using Discrete Orthogonal Hahn Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Akhmedova">Fatima Akhmedova</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Liao"> Simon Liao</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work, we propose a set of Hahn moments as a new approach to feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy and their ability to extract features either globally or locally. To assess the applicability of Hahn moments to face recognition, we conduct two experiments on the Olivetti Research Laboratory (ORL) database and the University of Notre Dame (UND) X1 biometric collection. A fusion of the global features with the features from local facial regions is used as input to a conventional k-NN classifier. The method correctly recognizes 93% of subjects for the ORL database and 94% for the UND database. 
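The classification stage of the pipeline described above (fused global and local moment features fed to a k-NN classifier) can be sketched as follows. Computing Hahn moments is beyond this sketch, so the feature vectors below are made-up stand-ins for fused descriptors; only the nearest-neighbour voting step is shown.

```python
import numpy as np

# Toy k-NN classifier over moment-based feature vectors. The training
# vectors and subject labels are invented placeholders for fused
# global/local Hahn-moment descriptors.

def knn_predict(train_X, train_y, query, k=1):
    """Classify `query` by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of k closest
    labels = [train_y[i] for i in nearest]
    return max(set(labels), key=labels.count)        # majority vote

train_X = np.array([[0.10, 0.90, 0.30],   # subject A, image 1
                    [0.12, 0.88, 0.31],   # subject A, image 2
                    [0.80, 0.20, 0.70]])  # subject B, image 1
train_y = ["A", "A", "B"]

query = np.array([0.11, 0.90, 0.30])      # probe image's feature vector
print(knn_predict(train_X, train_y, query, k=3))
```

In the described method the reported 93-94% accuracies come from exactly this kind of nearest-neighbour matching, with the discriminative power supplied by the moment features rather than by the classifier.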
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Hahn%20moments" title=" Hahn moments"> Hahn moments</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition-by-parts" title=" recognition-by-parts"> recognition-by-parts</a>, <a href="https://publications.waset.org/abstracts/search?q=time-lapse" title=" time-lapse"> time-lapse</a> </p> <a href="https://publications.waset.org/abstracts/27781/face-recognition-using-discrete-orthogonal-hahn-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5561</span> Preliminary Study of Human Reliability of Control in Case of Fire Based on the Decision Processes and Stress Model of Human in a Fire</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seung-Un%20Chae">Seung-Un Chae</a>, <a href="https://publications.waset.org/abstracts/search?q=Heung-Yul%20Kim"> Heung-Yul Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sa-Kil%20Kim"> Sa-Kil Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the findings of a preliminary study on human control performance in case of fire. The relationship between human control and human decision is studied through the decision processes and a stress model of humans in a fire. Human behavior aspects are involved in the decision process during a fire incident. 
The decision process comprises six individual perceptual processes: recognition, validation, definition, evaluation, commitment, and reassessment. Humans may then experience stress while trying to reach an optimal decision for their activity. This paper explores problems in human control processes and stresses in a catastrophic situation; future work will therefore be concerned with reducing stresses and ambiguous, irrelevant information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20reliability" title="human reliability">human reliability</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20processes" title=" decision processes"> decision processes</a>, <a href="https://publications.waset.org/abstracts/search?q=stress%20model" title=" stress model"> stress model</a>, <a href="https://publications.waset.org/abstracts/search?q=fire" title=" fire"> fire</a> </p> <a href="https://publications.waset.org/abstracts/50470/preliminary-study-of-human-reliability-of-control-in-case-of-fire-based-on-the-decision-processes-and-stress-model-of-human-in-a-fire" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50470.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">986</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5560</span> Familiarity with Intercultural Conflicts and Global Work Performance: Testing a Theory of Recognition Primed Decision-Making</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Rockstuhl">Thomas Rockstuhl</a>, <a href="https://publications.waset.org/abstracts/search?q=Kok%20Yee%20Ng"> Kok Yee Ng</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Guido%20Gianasso"> Guido Gianasso</a>, <a href="https://publications.waset.org/abstracts/search?q=Soon%20Ang"> Soon Ang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Two meta-analyses show that intercultural experience is not related to intercultural adaptation or performance in international assignments. These findings have prompted calls for a deeper grounding of research on international experience in the phenomenon of global work. Two issues, in particular, may limit current understanding of the relationship between international experience and global work performance. First, intercultural experience is too broad a construct and may not sufficiently capture the essence of global work, which to a large part involves sensemaking and managing intercultural conflicts. Second, the psychological mechanisms through which intercultural experience affects performance remain under-explored, resulting in a poor understanding of how experience is translated into learning and performance outcomes. Drawing on recognition primed decision-making (RPD) research, the current study advances a cognitive processing model to highlight the importance of intercultural conflict familiarity. Compared to intercultural experience, intercultural conflict familiarity is a more targeted construct that captures individuals’ previous exposure to dealing with intercultural conflicts. Drawing on RPD theory, we argue that individuals’ intercultural conflict familiarity enhances their ability to make accurate judgments and generate effective responses when intercultural conflicts arise. In turn, the ability to make accurate situation judgements and effective situation responses is an important predictor of global work performance. A relocation program within a multinational enterprise provided the context to test these hypotheses using a time-lagged, multi-source field study. 
Participants were 165 employees (46% female; with an average of 5 years of global work experience) from 42 countries who relocated from country offices to regional offices as part of a global restructuring program. Within the first two weeks of transfer to the regional office, employees completed measures of their familiarity with intercultural conflicts, cultural intelligence, cognitive ability, and demographic information. They also completed an intercultural situational judgment test (iSJT) to assess their situation judgment and situation response. The iSJT comprised four validated multimedia vignettes of challenging intercultural work conflicts and prompted employees to provide protocols of their situation judgment and situation response. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded the quality of employees’ situation judgment and situation response. Three months later, supervisors rated employees’ global work performance. Results using multilevel modeling (vignettes nested within employees) support the hypotheses that greater familiarity with intercultural conflicts is positively associated with better situation judgment, and that situation judgment mediates the effect of intercultural familiarity on situation response quality. Also, aggregated situation judgment and situation response quality both predicted supervisor-rated global work performance. Theoretically, our findings highlight the important but under-explored role of familiarity with intercultural conflicts, shifting attention away from the general nature of international experience assessed in terms of the number and length of overseas assignments. Also, our cognitive approach premised on RPD theory offers a new theoretical lens for understanding the psychological mechanisms through which intercultural conflict familiarity affects global work performance. 
Finally, and importantly, our study contributes to the global talent identification literature by demonstrating that the cognitive processes engaged in resolving intercultural conflicts predict actual performance in the global workplace. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intercultural%20conflict%20familiarity" title="intercultural conflict familiarity">intercultural conflict familiarity</a>, <a href="https://publications.waset.org/abstracts/search?q=job%20performance" title=" job performance"> job performance</a>, <a href="https://publications.waset.org/abstracts/search?q=judgment%20and%20decision%20making" title=" judgment and decision making"> judgment and decision making</a>, <a href="https://publications.waset.org/abstracts/search?q=situational%20judgment%20test" title=" situational judgment test"> situational judgment test</a> </p> <a href="https://publications.waset.org/abstracts/106328/familiarity-with-intercultural-conflicts-and-global-work-performance-testing-a-theory-of-recognition-primed-decision-making" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/106328.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5559</span> Handwriting Recognition of Gurmukhi Script: A Survey of Online and Offline Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ravneet%20Kaur">Ravneet Kaur</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is a very interesting area of pattern recognition. 
Over the past few decades, intensive research on character recognition for Roman, Chinese, Japanese, and Indian scripts has been reported. This paper highlights a review of handwritten character recognition work on the Indian script Gurmukhi: most of the published papers are summarized, various methodologies are analysed, and their results are reported. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurmukhi%20character%20recognition" title="Gurmukhi character recognition">Gurmukhi character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=online" title=" online"> online</a>, <a href="https://publications.waset.org/abstracts/search?q=offline" title=" offline"> offline</a>, <a href="https://publications.waset.org/abstracts/search?q=HCR%20survey" title=" HCR survey"> HCR survey</a> </p> <a href="https://publications.waset.org/abstracts/46337/handwriting-recognition-of-gurmukhi-script-a-survey-of-online-and-offline-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5558</span> OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Bagirzade">A. R. Bagirzade</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Sh.%20Najafova"> A. Sh. Najafova</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20M.%20Yessirkepova"> S. M. 
Yessirkepova</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20S.%20Albert"> E. S. Albert</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program, which performs automatic text recognition in images. OCR is necessary because optical input devices can deliver only raster graphics. Text recognition is the task of identifying the letters shown in an image and assigning each one its numerical value under a standard text encoding (ASCII, Unicode). Using ABBYY FineReader as an example, the authors confirm and show in practice the improvement of digital text recognition platforms for electronic publishing. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ABBYY%20FineReader%20system" title="ABBYY FineReader system">ABBYY FineReader system</a>, <a href="https://publications.waset.org/abstracts/search?q=algorithm%20symbol%20recognition" title=" algorithm symbol recognition"> algorithm symbol recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OCR%2FICR%20techniques" title=" OCR/ICR techniques"> OCR/ICR techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20technologies" title=" recognition technologies"> recognition technologies</a> </p> <a href="https://publications.waset.org/abstracts/130255/ocricr-text-recognition-using-abbyy-finereader-as-an-example-text" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130255.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5557</span> An Improved OCR Algorithm on Appearance Recognition of Electronic Components Based on Self-adaptation of Multifont Template</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhu-Qing%20Jia">Zhu-Qing Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Lin"> Tao Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Tong%20Zhou"> Tong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Optical Character Recognition methods have been extensively utilized, yet they are rarely employed specifically for recognizing electronic components. Building on existing character recognition methods, this paper proposes a highly effective algorithm for appearance identification of integrated circuit components and analyzes its pros and cons.
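Template-based appearance recognition of this kind rests on correlating a candidate glyph against stored templates. A minimal sketch of that idea using normalized cross-correlation over toy 3x3 glyphs (the templates and class names below are invented for illustration and are not from the paper):

```python
def ncc(window, template):
    """Normalized cross-correlation between two equal-size 2D patches."""
    n = len(template) * len(template[0])
    wv = [p for row in window for p in row]
    tv = [p for row in template for p in row]
    mw, mt = sum(wv) / n, sum(tv) / n
    num = sum((a - mw) * (b - mt) for a, b in zip(wv, tv))
    dw = sum((a - mw) ** 2 for a in wv) ** 0.5
    dt = sum((b - mt) ** 2 for b in tv) ** 0.5
    return num / (dw * dt) if dw and dt else 0.0

def recognise(glyph, templates):
    """Pick the template with the highest correlation score; a multifont
    version would keep one template per font per class and adapt a
    per-class confidence threshold."""
    return max(templates, key=lambda name: ncc(glyph, templates[name]))

TEMPLATES = {  # 3x3 toy glyphs standing in for component markings
    "1": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "7": [[1, 1, 1], [0, 0, 1], [0, 0, 1]],
}
print(recognise([[0, 1, 0], [0, 1, 0], [0, 1, 0]], TEMPLATES))  # 1
```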
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20character%20recognition" title="optical character recognition">optical character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20page%20identification" title=" fuzzy page identification"> fuzzy page identification</a>, <a href="https://publications.waset.org/abstracts/search?q=mutual%20correlation%20matrix" title=" mutual correlation matrix"> mutual correlation matrix</a>, <a href="https://publications.waset.org/abstracts/search?q=confidence%20self-adaptation" title=" confidence self-adaptation"> confidence self-adaptation</a> </p> <a href="https://publications.waset.org/abstracts/14322/an-improved-ocr-algorithm-on-appearance-recognition-of-electronic-components-based-on-self-adaptation-of-multifont-template" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14322.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">540</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5556</span> Overview of a Quantum Model for Decision Support in a Sensor Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shahram%20Payandeh">Shahram Payandeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an overview of a model which can be used as a part of a decision support system when fusing information from multiple sensing environment. Data fusion has been widely studied in the past few decades and numerous frameworks have been proposed to facilitate decision making process under uncertainties. 
Multi-sensor data fusion technology plays an increasingly significant role during people tracking and activity recognition. This paper presents an overview of a quantum model as a part of a decision-making process in the context of multi-sensor data fusion. The paper presents basic definitions and relationships associating the decision-making process and quantum model formulation in the presence of uncertainties. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quantum%20model" title="quantum model">quantum model</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20space" title=" sensor space"> sensor space</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20network" title=" sensor network"> sensor network</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20support" title=" decision support"> decision support</a> </p> <a href="https://publications.waset.org/abstracts/119110/overview-of-a-quantum-model-for-decision-support-in-a-sensor-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">227</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5555</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 
There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=labeled%20faces%20in%20the%20wild%20%28LFW%29%20database" title=" labeled faces in the wild (LFW) database"> labeled faces in the wild (LFW) database</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20local%20descriptor%20%28RLD%29" title=" random local descriptor (RLD)"> random local descriptor (RLD)</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20features" title=" random features"> random features</a> </p> <a href="https://publications.waset.org/abstracts/50117/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50117.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5554</span> Importance of Developing a Decision Support System for Diagnosis of Glaucoma</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Murat%20Durucu">Murat Durucu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Glaucoma causes irreversible blindness; early diagnosis and appropriate interventions can help patients keep their sight longer. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when pressure builds up around the eyes, damaging the optic nerves and deteriorating vision. The disease progresses through different levels, up to blindness. Diagnosis at an early stage gives a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photo (SDP), and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT images, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult to obtain objective results from the doctors' diagnosis and grading process. It therefore seems very important to develop an objective decision support system that diagnoses and grades glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop such an evaluation and support system for doctors' use.
Computer software based on a pattern recognition system would help the doctors make an objective evaluation of their patients. After the development and evaluation of the software, the system is planned to serve doctors in different hospitals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decision%20support%20system" title="decision support system">decision support system</a>, <a href="https://publications.waset.org/abstracts/search?q=glaucoma" title=" glaucoma"> glaucoma</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/54286/importance-of-developing-a-decision-support-system-for-diagnosis-of-glaucoma" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54286.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5553</span> A Human Activity Recognition System Based on Sensory Data Related to Object Usage </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Abdullah">M. Abdullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Al-Wadud"> Al-Wadud</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sensor-based activity recognition systems usually account only for which sensors have been activated to perform an activity.
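The combination of activated and non-activated sensor evidence can be sketched with a naïve Bayes scorer; the activities, sensors, and probability tables below are invented for illustration and are not taken from the paper:

```python
import math

# Hypothetical per-activity probabilities that each sensor-attached
# object is used during that activity (illustrative numbers only).
P_USED = {
    "make_tea":   {"kettle": 0.9, "cup": 0.8, "stove": 0.1},
    "cook_pasta": {"kettle": 0.2, "cup": 0.1, "stove": 0.9},
}
PRIOR = {"make_tea": 0.5, "cook_pasta": 0.5}

def classify(active_sensors, all_sensors):
    """Naive-Bayes-style scoring that uses BOTH activated and
    non-activated sensors: an inactive sensor contributes (1 - p)."""
    scores = {}
    for activity, p_used in P_USED.items():
        log_score = math.log(PRIOR[activity])
        for s in all_sensors:
            p = p_used[s]
            log_score += math.log(p if s in active_sensors else 1.0 - p)
        scores[activity] = log_score
    return max(scores, key=scores.get)

print(classify({"kettle", "cup"}, ["kettle", "cup", "stove"]))  # make_tea
```

Note how the inactive stove still contributes evidence for "make_tea" via the (1 - p) term, which is the non-usage information the abstract argues for.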
The system then combines the conditional probabilities of those sensors to represent different activities and takes the decision based on that. However, the information about the sensors which are not activated may also be of great help in deciding which activity has been performed. This paper proposes an approach where the sensory data related to both usage and non-usage of objects are utilized to make the classification of activities. Experimental results also show the promising performance of the proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Na%C3%AFve%20Bayesian" title="Naïve Bayesian">Naïve Bayesian</a>, <a href="https://publications.waset.org/abstracts/search?q=based%20classification" title=" based classification"> based classification</a>, <a href="https://publications.waset.org/abstracts/search?q=activity%20recognition" title=" activity recognition"> activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20data" title=" sensor data"> sensor data</a>, <a href="https://publications.waset.org/abstracts/search?q=object-usage%20model" title=" object-usage model"> object-usage model</a> </p> <a href="https://publications.waset.org/abstracts/4112/a-human-activity-recognition-system-based-on-sensory-data-related-to-object-usage" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4112.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5552</span> Cultural Disposition and Implicit Dehumanization of Sexualized Females by Women</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Hong%20Im%20Shin">Hong Im Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Previous research demonstrated that self-objectification (women view themselves as objects for use) is related to system-justification. Three studies investigated whether cultural disposition as its system-justifying function could have an impact on self-objectification and dehumanization of sexualized women and men. Study 1 (N = 91) employed a survey methodology to examine the relationship between cultural disposition (collectivism vs. individualism), trait of system-justification, and self-objectification. The results showed that the higher tendency of collectivism was related to stronger system-justification and self-objectification. Study 2 (N = 60 females) introduced a single category implicit association task (SC-IAT) to assess the extent to which sexually objectified women were associated with uniquely human attributes (i.e., culture) compared to animal-related attributes (i.e., nature). According to results, female participants associated sexually objectified female targets less with human attributes compared to animal-related attributes. Study 3 (N = 46) investigated whether priming to individualism or collectivism was associated to system justification and sexual objectification of men and women with the use of a recognition task involving upright and inverted pictures of sexualized women and men. The results indicated that the female participants primed to individualism showed an inversion effect for sexualized women and men (person-like recognition), whereas there was no inversion effect for sexualized women in the priming condition of collectivism (object-like recognition). This implies that cultural disposition plays a mediating role for rationalizing the gender status, implicit dehumanization of sexualized females and self-objectification. Future research directions are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20disposition" title="cultural disposition">cultural disposition</a>, <a href="https://publications.waset.org/abstracts/search?q=dehumanization" title=" dehumanization"> dehumanization</a>, <a href="https://publications.waset.org/abstracts/search?q=implicit%20test" title=" implicit test"> implicit test</a>, <a href="https://publications.waset.org/abstracts/search?q=self-objectification" title=" self-objectification"> self-objectification</a> </p> <a href="https://publications.waset.org/abstracts/82752/cultural-disposition-and-implicit-dehumanization-of-sexualized-females-by-women" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82752.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5551</span> Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amir%20Hajian">Amir Hajian</a>, <a href="https://publications.waset.org/abstracts/search?q=Sepehr%20Damavandinejadmonfared"> Sepehr Damavandinejadmonfared</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is to find the most appropriate kernel function on finger vein recognition as there are several kernel functions which can be used within PCA-based algorithms. 
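A generic sketch of how a kernel function and a fixed feature dimension enter KPCA (an RBF kernel is shown; a polynomial or Laplacian kernel would replace the kernel-matrix line; this is not the paper's implementation, and the data here is random stand-in input):

```python
import numpy as np

def kpca_features(X, dim, gamma=1.0):
    """Kernel PCA sketch with a Gaussian (RBF) kernel: build the kernel
    matrix, double-centre it, and keep the top `dim` eigenvectors to get
    fixed-length feature vectors for a downstream classifier."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)      # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]   # top `dim` components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                   # projected training features

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))             # stand-in for vein feature vectors
Z = kpca_features(X, dim=5)
print(Z.shape)                           # (40, 5)
```

The `dim` argument is exactly the fixed feature-vector dimension discussed in the abstract: changing it trades classifier input size against retained kernel-space variance.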
In this paper, however, another side of PCA-based algorithms (particularly KPCA) is investigated: the dimension of the feature vector, which is of particular importance in real-world applications and usage of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with Polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=finger%20vein%20recognition" title=" finger vein recognition"> finger vein recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis%20%28PCA%29" title=" principal component analysis (PCA)"> principal component analysis (PCA)</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20principal%20component%20analysis%20%28KPCA%29" title=" kernel principal component analysis (KPCA)"> kernel principal component analysis (KPCA)</a> </p> <a href="https://publications.waset.org/abstracts/14476/optimal-feature-extraction-dimension-in-finger-vein-recognition-using-kernel-principal-component-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14476.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5550</span> DBN-Based
Face Recognition System Using Light Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bing%20Gu">Bing Gu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Most conventional facial recognition systems are based on image features such as LBP and SIFT. Recently, some DBN-based 2D facial recognition systems have been proposed; however, there are few DBN-based 3D facial recognition systems and related studies. 3D facial images contain all of an individual's biometric information, which can be used to build more accurate features, so we present a DBN-based face recognition system using light fields. A light field can be seen as another representation of a 3D image, and a light field camera offers a way to capture one. We use a commercially available light field camera as the collector of our face recognition system, and the system achieves state-of-the-art performance while remaining as convenient as a conventional 2D face recognition system.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DBN" title="DBN">DBN</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=light%20field" title=" light field"> light field</a>, <a href="https://publications.waset.org/abstracts/search?q=Lytro" title=" Lytro"> Lytro</a> </p> <a href="https://publications.waset.org/abstracts/10821/dbn-based-face-recognition-system-using-light-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10821.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5549</span> Evolution of the Environmental Justice Concept</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zahra%20Bakhtiari">Zahra Bakhtiari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article explores the development and evolution of the concept of environmental justice, which has shifted from being dominated by white and middle-class individuals to a civil struggle by marginalized communities against environmental injustices. Environmental justice aims to achieve equity in decision-making and policy-making related to the environment. The concept of justice in this context includes four fundamental aspects: distribution, procedure, recognition, and capabilities. Recent scholars have attempted to broaden the concept of justice to include dimensions of participation, recognition, and capabilities. 
Focusing on all four dimensions of environmental justice is crucial for effective planning and policy-making to address environmental issues. Ignoring any of these aspects can lead to the failure of efforts and the waste of resources. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=environmental%20justice" title="environmental justice">environmental justice</a>, <a href="https://publications.waset.org/abstracts/search?q=distribution" title=" distribution"> distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=procedure" title=" procedure"> procedure</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition" title=" recognition"> recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=capabilities" title=" capabilities"> capabilities</a> </p> <a href="https://publications.waset.org/abstracts/163335/evolution-of-the-environmental-justice-concept" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163335.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5548</span> Kannada HandWritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhatan and Minimum Distance Classifiers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20V.%20Aravinda">C. V. Aravinda</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20N.%20Prakash"> H. N. Prakash</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we convey the fusion of techniques and the state of the art pertaining to SIL character recognition systems.
In the first step, the text is preprocessed and normalized so that identification can be performed correctly. The second step extracts relevant and informative features, and the third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are considered: their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features with differing effectiveness. Therefore, our study focuses on handwriting recognition based on feature selection, in order to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
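A simplified sketch of the edge-hinge extraction just described, using a 3x3 window over a toy binary edge image (illustrative only; the window size and direction quantization used in the surveyed work may differ):

```python
from collections import Counter
from itertools import combinations

# 8-neighbour offsets indexed by direction code 0..7.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def edge_hinge(img):
    """Simplified edge-hinge sketch: for every 'on' centre pixel, take
    each pair of directions toward 'on' neighbours and accumulate a
    joint distribution over those direction pairs."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not img[y][x]:
                continue
            dirs = [d for d, (dy, dx) in enumerate(DIRS) if img[y + dy][x + dx]]
            for pair in combinations(dirs, 2):  # ordered (d1 < d2) pairs
                counts[pair] += 1
    total = sum(counts.values()) or 1
    return {pair: c / total for pair, c in counts.items()}

# A short diagonal stroke as a toy edge image.
stroke = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(edge_hinge(stroke))
```

The resulting normalized histogram over direction pairs is the joint probability distribution that serves as the writer/script feature vector.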
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=word%20segmentation%20and%20recognition" title="word segmentation and recognition">word segmentation and recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20character%20recognition" title=" optical character recognition"> optical character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20written%20character%20recognition" title=" hand written character recognition"> hand written character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=South%20Indian%20languages" title=" South Indian languages"> South Indian languages</a> </p> <a href="https://publications.waset.org/abstracts/41271/kannada-handwritten-character-recognition-by-edge-hinge-and-edge-distribution-techniques-using-manhatan-and-minimum-distance-classifiers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41271.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">494</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5547</span> The Effects of Future Priming on Resource Concern</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Calvin%20Rong">Calvin Rong</a>, <a href="https://publications.waset.org/abstracts/search?q=Regina%20Agassian"> Regina Agassian</a>, <a href="https://publications.waset.org/abstracts/search?q=Mindy%20Engle-Friedman"> Mindy Engle-Friedman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Climate 
changes, including rising sea levels and increases in global temperature, can have major effects on resource availability, leading to increased competition for resources and rising food prices. The abstract nature and often delayed consequences of many ecological problems cause people to focus on immediate, specific, and personal events and circumstances that compel immediate and emotional involvement. This finding may be explained by the difficulty humans have in imagining themselves in the future, a shortcoming that interferes with decision-making involving far-off rewards and leads people to express lower concern for the future than for present circumstances. The present study sought to assess whether priming people to think of themselves in the future might strengthen the connection to their future selves and stimulate environmentally protective behavior. We hypothesized that priming participants to think about themselves in the future would increase concern for the future environment. 45 control participants were primed to think about themselves in the present, and 42 participants were primed to think about themselves in the future. After priming, the participants rated their concern over access to clean water, food, and energy on a scale of 1 to 10. They also rated their predicted care levels for the environment at age points 40, 50, 60, 70, 80, and 90 on a scale of 1 (not at all) to 10 (very much). Predicted care levels at age 90 were significantly higher for the experimental group than for the control group. Overall, the experimental group rated their concern for resources higher than the control group did. In comparison to the control group (M=7.60, SD=2.104), participants in the experimental group had greater concern for clean water (M=8.56, SD=1.534). In comparison to the control group (M=7.49, SD=2.041), participants in the experimental group were more concerned about food resources (M=8.41, SD=1.830).
In comparison to the control group (M=7.22, SD=1.999) participants in the experimental group were more concerned about energy resources (M=8.07, SD=1.967). This study assessed whether a priming strategy could be used to encourage pro-environmental practices that protect limited resources. Future-self priming helped participants see past short term issues and focus on concern for the future environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=climate%20change" title="climate change">climate change</a>, <a href="https://publications.waset.org/abstracts/search?q=future" title=" future"> future</a>, <a href="https://publications.waset.org/abstracts/search?q=priming" title=" priming"> priming</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20warming" title=" global warming"> global warming</a> </p> <a href="https://publications.waset.org/abstracts/77487/the-effects-of-future-priming-on-resource-concern" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77487.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">257</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5546</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. 
As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. Such a system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system. The idea behind designing and creating a face recognition system using deep learning with Azure ML and Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given: 98.46% accuracy using Fast-RCNN, with the performance of the algorithms evaluated under different training conditions. 
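The two-phase structure described above (detect a face, then identify it) can be sketched in miniature. The snippet below is an illustrative NumPy sketch of the identification phase only: a detected face embedding is matched against an enrolled gallery by cosine similarity. The gallery names, the 128-dimensional embeddings, and the 0.6 acceptance threshold are invented for illustration and are not from the paper, which uses Fast-RCNN.

```python
import numpy as np

def identify(embedding, gallery, threshold=0.6):
    """Match a face embedding against enrolled identities by cosine similarity.

    Returns (name, score) for the best match, or (None, score) if the best
    similarity falls below the acceptance threshold."""
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        score = float(np.dot(embedding, ref) /
                      (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy gallery of 3 enrolled identities; random vectors stand in for CNN embeddings.
rng = np.random.default_rng(0)
gallery = {n: rng.normal(size=128) for n in ("alice", "bob", "carol")}
probe = gallery["bob"] + 0.05 * rng.normal(size=128)  # a slightly noisy view of "bob"
name, score = identify(probe, gallery)
```

In a real pipeline the embeddings would come from the detection network's feature extractor rather than a random generator; only the matching logic is shown here.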
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5545</span> A Neuro-Automata Decision Support System for the Control of Late Blight in Tomato Crops</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gizelle%20K.%20Vianna">Gizelle K. Vianna</a>, <a href="https://publications.waset.org/abstracts/search?q=Gustavo%20S.%20Oliveira"> Gustavo S. Oliveira</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabriel%20V.%20Cunha"> Gabriel V. Cunha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of decision support systems in agriculture may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. In our work, we designed and implemented a decision support system for small tomato producers. 
This work investigates ways to recognize the late blight disease from the analysis of digital images of tomatoes, using a pair of multilayer perceptron neural networks. The networks' outputs are used to generate repainted tomato images in which the injuries on the plant are highlighted, and to calculate the damage level of each plant. Those levels are then used to construct a situation map of a farm where a cellular automaton simulates the outbreak evolution over the fields. The simulator can test different pesticide actions, helping in deciding when to start spraying and in analyzing the losses and gains of each choice of action. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=cellular%20automata" title=" cellular automata"> cellular automata</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20support%20system" title=" decision support system"> decision support system</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/63771/a-neuro-automata-decision-support-system-for-the-control-of-late-blight-in-tomato-crops" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63771.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">455</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5544</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of emotion recognition is challenging, and it is still open from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison is made between classifiers built from facial data only, voice data only, and the combination of both. The need for a better combination of the information from facial expression and voice data is argued. 
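The comparison of single-modality versus combined classifiers described above can be illustrated with feature-level fusion, one common way of combining voice and facial data. This is a hedged NumPy sketch, not the authors' SVM pipeline: each modality is normalized and concatenated, and a toy nearest-centroid classifier stands in for the SVM on synthetic two-class "emotion" data; all feature dimensions and class layouts are invented.

```python
import numpy as np

def zscore(x):
    """Normalize each feature column to zero mean and unit variance."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)

def fuse(voice, face):
    """Feature-level fusion: normalize each modality separately, then concatenate."""
    return np.hstack([zscore(voice), zscore(face)])

def nearest_centroid(train_x, train_y, test_x):
    """Assign each test sample to the class whose training centroid is nearest."""
    centroids = {c: train_x[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    labels = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(test_x - centroids[c], axis=1) for c in labels])
    return labels[dists.argmin(axis=0)]

# Synthetic "emotion" data: 2 classes, 4 voice features + 4 facial features each.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
voice = rng.normal(loc=y[:, None] * 3.0, scale=0.3, size=(40, 4))
face = rng.normal(loc=y[:, None] * 3.0, scale=0.3, size=(40, 4))
X = fuse(voice, face)
pred = nearest_centroid(X, y, X)
acc = float((pred == y).mean())
```

Normalizing per modality before concatenation keeps one modality's scale from dominating the distance computation, which is the usual motivation for this fusion step.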
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5543</span> A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Zhang">Hui Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Tian"> Ye Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang%20Ye"> Fang Ye</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziming%20Guo"> Ziming Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, communication signal automatic modulation recognition methods are mainly divided into two major categories. 
One is the maximum likelihood hypothesis testing method based on decision theory; the other is the statistical pattern recognition method based on feature extraction. The statistical pattern recognition method, which includes feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for the communication signal based on the improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. This addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature has difficulty recognizing signals at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR: even when the SNR is -15 dB, the recognition accuracy still reaches 76%. 
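The basic Hölder coefficient underlying the features described above can be sketched as follows. For non-negative sequences f and g and exponents satisfying 1/p + 1/q = 1, Hölder's inequality bounds the coefficient to [0, 1], and with p = q = 2 it reduces to cosine similarity. The NumPy sketch below computes plain Hölder coefficients of a signal's amplitude spectrum against rectangular and triangular reference sequences, a common choice in this literature; the paper's improved cloud model and ELM classifier are not reproduced, and the test signal is invented for illustration.

```python
import numpy as np

def holder_coefficient(f, g, p=2.0):
    """Hölder coefficient of two non-negative sequences, with 1/p + 1/q = 1.

    By Hölder's inequality the value lies in [0, 1]; p = q = 2 gives
    cosine similarity."""
    q = p / (p - 1.0)
    num = np.sum(f * g)
    den = np.sum(f ** p) ** (1.0 / p) * np.sum(g ** q) ** (1.0 / q)
    return float(num / den)

def holder_features(signal):
    """Two Hölder-coefficient features of a signal's amplitude spectrum,
    taken against rectangular and triangular reference sequences."""
    spec = np.abs(np.fft.fft(signal))
    n = spec.size
    rect = np.ones(n)
    tri = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))
    return holder_coefficient(spec, rect), holder_coefficient(spec, tri)

# Invented test signal: a constant-envelope carrier plus additive noise.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
carrier = np.cos(2 * np.pi * 20 * t)
noisy = carrier + 0.5 * np.random.default_rng(2).normal(size=t.size)
c1, c2 = holder_features(noisy)
```

Different modulation types concentrate spectral energy differently, so these two coefficients separate signal classes; the paper's contribution is stabilizing such features at low SNR via the improved cloud model.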
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication%20signal" title="communication signal">communication signal</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=Holder%20coefficient" title=" Holder coefficient"> Holder coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=improved%20cloud%20model" title=" improved cloud model"> improved cloud model</a> </p> <a href="https://publications.waset.org/abstracts/101463/a-communication-signal-recognition-algorithm-based-on-holder-coefficient-characteristics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101463.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5542</span> A Straightforward Approach for Determining the Weights of Decision Makers Based on Angle Cosine and Projection Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Yang">Qiang Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ping-An%20Du"> Ping-An Du</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Group decision making with multiple attributes has attracted intensive attention in the decision analysis area. This paper assumes that the contributions of the decision makers (DMs) to the decision process are not equal, owing to their different knowledge and experience in the group setting. 
The aim of this paper is to develop a novel approach to determining the weights of DMs in group decision making problems. In this paper, the weights of DMs are determined in the group decision environment via the angle cosine and projection method. First, the average of all individual decisions is defined as the ideal decision. After that, we define the weight of each decision maker (DM) by aggregating the angle cosine and projection between the individual decision and the ideal decision with an associated direction indicator μ. By using the weights of DMs, all individual decisions are aggregated into a collective decision. Further, the preference order of the alternatives is ranked in accordance with the overall row value of the collective decision. Finally, an example in a chemical company is provided to illustrate the developed approach. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=angel%20cosine" title="angle cosine">angle cosine</a>, <a href="https://publications.waset.org/abstracts/search?q=ideal%20decision" title=" ideal decision"> ideal decision</a>, <a href="https://publications.waset.org/abstracts/search?q=projection%20method" title=" projection method"> projection method</a>, <a href="https://publications.waset.org/abstracts/search?q=weights%20of%20decision%20makers" title=" weights of decision makers"> weights of decision makers</a> </p> <a href="https://publications.waset.org/abstracts/35292/a-straightforward-approach-for-determining-the-weights-of-decision-makers-based-on-angle-cosine-and-projection-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35292.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">5541</span> Possibilities, Challenges and the State of the Art of Automatic Speech Recognition in Air Traffic Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Nhan%20Nguyen">Van Nhan Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20Holone"> Harald Holone</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past few years, a lot of research has been conducted to bring Automatic Speech Recognition (ASR) into various areas of Air Traffic Control (ATC), such as air traffic control simulation and training, monitoring live operators with the aim of improving safety, measuring air traffic controller workload, and analyzing large quantities of controller-pilot speech. Due to the high accuracy requirements of the ATC context and its unique challenges, automatic speech recognition has not been widely adopted in this field. With the aim of providing a good starting point for researchers who are interested in bringing automatic speech recognition into ATC, this paper gives an overview of the possibilities and challenges of applying automatic speech recognition in air traffic control. To provide this overview, we present an updated literature review of speech recognition technologies in general, as well as specific approaches relevant to the ATC context. Based on this literature review, criteria for selecting speech recognition approaches for the ATC domain are presented, and remaining challenges and possible solutions are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20speech%20recognition" title="automatic speech recognition">automatic speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=asr" title=" asr"> asr</a>, <a href="https://publications.waset.org/abstracts/search?q=air%20traffic%20control" title=" air traffic control"> air traffic control</a>, <a href="https://publications.waset.org/abstracts/search?q=atc" title=" atc"> atc</a> </p> <a href="https://publications.waset.org/abstracts/31004/possibilities-challenges-and-the-state-of-the-art-of-automatic-speech-recognition-in-air-traffic-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5540</span> Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ainagul%20Yermekova">Ainagul Yermekova</a>, <a href="https://publications.waset.org/abstracts/search?q=Liudmila%20Goncharenko"> Liudmila Goncharenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Baghirzade"> Ali Baghirzade</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergey%20Sybachin"> Sergey Sybachin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. 
In connection with these events, the problem of recognizing the characters of the new alphabet is raised. Well-known character recognition programs such as ABBYY FineReader, FormReader, and MyScript Stylus did not recognize the specific Kazakh letters that were used in Cyrillic. The authors assess the well-known character recognition methods that could be in demand as part of the country's transition to the Latin alphabet. Three character recognition methods, template, structured, and feature-based, are considered through their algorithms of operation. At the end of the article, a general conclusion is made about the possibility of applying a certain method to a particular recognition process: for example, in the process of a population census, recognition of typographic text in Latin, or recognition of photos of car numbers, store signs, etc. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title="text detection">text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=template%20method" title=" template method"> template method</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20algorithm" title=" recognition algorithm"> recognition algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=structured%20method" title=" structured method"> structured method</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20method" title=" feature method"> feature method</a> </p> <a href="https://publications.waset.org/abstracts/138734/switching-to-the-latin-alphabet-in-kazakhstan-a-brief-overview-of-character-recognition-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> 
</span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5539</span> Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gheida%20J.%20Shahrour">Gheida J. Shahrour</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20J.%20Russell"> Martin J. Russell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects and arranged them into groups according to their culture. Each group was arranged into pairs, and each pair communicated about different topics. A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition. We borrowed modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy from the person, culture, and topic recognition systems, respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, it seems that our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., the subject’s personality traits) are a major source of variation. 
When removing these traits from culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and the Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy from culture and topic recognition systems respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=person%20recognition" title="person recognition">person recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20recognition" title=" topic recognition"> topic recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=culture%20recognition" title=" culture recognition"> culture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20body%20movement%20signals" title=" 3D body movement signals"> 3D body movement signals</a>, <a href="https://publications.waset.org/abstracts/search?q=variability%20compensation" title=" variability compensation"> variability compensation</a> </p> <a href="https://publications.waset.org/abstracts/19473/recognizing-an-individual-their-topic-of-conversation-and-cultural-background-from-3d-body-movement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19473.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">541</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5538</span> Complex Decision Rules in the Form of Decision Trees</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Avinash%20S.%20Jagtap">Avinash S. Jagtap</a>, <a href="https://publications.waset.org/abstracts/search?q=Sharad%20D.%20Gore"> Sharad D. 
Gore</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajendra%20G.%20Gurao"> Rajendra G. Gurao </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Decision rules become more and more complex as the number of conditions increases. As a consequence, the complexity of the decision rule also influences the time complexity of its computer implementation. Consider, for example, a decision that depends on four conditions A, B, C and D. For simplicity, suppose each of these four conditions is binary. Even then the decision rule will consist of 16 lines, where each line will be of the form: If A and B and C and D, then action 1; if A and B and C but not D, then action 2; and so on. While executing this decision rule, each of the four conditions will be checked every time until all four conditions in a line are satisfied. The minimum number of logical comparisons is 4, whereas the maximum number is 64. This paper proposes to present a complex decision rule in the form of a decision tree. A decision tree divides the cases into branches every time a condition is checked. In the form of a decision tree, every branching eliminates half of the cases that do not satisfy the related conditions. As a result, every branch of the decision tree involves only four logical comparisons and hence is significantly simpler than the corresponding complex decision rule. 
The conclusion of this paper is that every complex decision rule can be represented as a decision tree, and the decision tree is mathematically equivalent to, but computationally much simpler than, the original complex decision rule. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strategic" title="strategic">strategic</a>, <a href="https://publications.waset.org/abstracts/search?q=tactical" title=" tactical"> tactical</a>, <a href="https://publications.waset.org/abstracts/search?q=operational" title=" operational"> operational</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive" title=" adaptive"> adaptive</a>, <a href="https://publications.waset.org/abstracts/search?q=innovative" title=" innovative"> innovative</a> </p> <a href="https://publications.waset.org/abstracts/77189/complex-decision-rules-in-the-form-of-decision-trees" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77189.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=5">5</a></li> 
<li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=185">185</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=186">186</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognition%20primed%20decision&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My 
Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a 
href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
