<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: perceptual present</title> <meta name="description" content="Search results for: perceptual present"> <meta name="keywords" content="perceptual present"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="perceptual present" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="perceptual present"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 13512</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: perceptual present</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13512</span> Perceptual Organization within Temporal Displacement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michele%20Sinico">Michele Sinico</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The psychological present has an actual extension. When a sequence of instantaneous stimuli falls in this short interval of time, observers perceive a compresence of events in succession and the temporal order depends on the qualitative relationships between the perceptual properties of the events. Two experiments were carried out to study the influence of perceptual grouping, with and without temporal displacement, on the duration of auditory sequences. 
The psychophysical method of adjustment was adopted. The first experiment investigated the effect of the temporal displacement of white noise on sequence duration. The second experiment investigated the effect of temporal displacement, along the pitch dimension, on the temporal shortening of the sequence. The results suggest that the temporal order of sounds, in the case of temporal displacement, is organized along the pitch dimension. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=time%20perception" title="time perception">time perception</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20present" title=" perceptual present"> perceptual present</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20displacement" title=" temporal displacement"> temporal displacement</a>, <a href="https://publications.waset.org/abstracts/search?q=Gestalt%20laws%20of%20perceptual%20organization" title=" Gestalt laws of perceptual organization"> Gestalt laws of perceptual organization</a> </p> <a href="https://publications.waset.org/abstracts/76211/perceptual-organization-within-temporal-displacement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76211.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13511</span> Improving Perceptual Reasoning in School Children through Chess Training</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ebenezer%20Joseph">Ebenezer Joseph</a>, <a href="https://publications.waset.org/abstracts/search?q=Veena%20Easvaradoss"> Veena Easvaradoss</a>, <a
href="https://publications.waset.org/abstracts/search?q=S.%20Sundar%20Manoharan"> S. Sundar Manoharan</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Chandran"> David Chandran</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumathi%20Chandrasekaran"> Sumathi Chandrasekaran</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20R.%20Uma"> T. R. Uma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Perceptual reasoning is the ability that incorporates fluid reasoning, spatial processing, and visual motor integration. Several theories of cognitive functioning emphasize the importance of fluid reasoning. The ability to manipulate abstractions and rules and to generalize is required for reasoning tasks. This study, funded by the Cognitive Science Research Initiative, Department of Science and Technology, Government of India, analyzed the effect of 1 year of chess training on the perceptual reasoning of children. A pretest–posttest control-group design was used, with 43 (28 boys, 15 girls) children in the experimental group and 42 (26 boys, 16 girls) children in the control group. The sample was selected from children studying in two private schools in South India (grades 3 to 9) and included both genders. The experimental group underwent weekly 1-hour chess training for 1 year. Perceptual reasoning was measured by three subtests of WISC-IV INDIA. Pre-equivalence of means was established. Further statistical analyses revealed that the experimental group showed statistically significant improvement in perceptual reasoning compared to the control group. The present study establishes a clear link between chess training and perceptual reasoning. If perceptual reasoning can be enhanced in children, it could possibly improve executive functions as well as the scholastic performance of the child.
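The pretest–posttest control-group analysis summarized above amounts to comparing score gains between the trained and untrained groups. As a minimal sketch (the gain scores below are invented for illustration and are not the study's data, which came from WISC-IV INDIA subtests), Welch's t statistic can be computed with the Python standard library alone:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic for two independent samples,
    e.g. pretest-to-posttest gain scores of two groups."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)  # sample variances
    return (mean(x) - mean(y)) / (vx / nx + vy / ny) ** 0.5

# Invented gain scores for illustration only (not the study's data):
experimental = [6, 8, 5, 9, 7, 6, 8, 7]  # chess-trained group
control = [1, 2, 0, 3, 1, 2, 1, 2]       # no chess training
t = welch_t(experimental, control)
print(round(t, 2))
```

A t value well above roughly 2 for samples of this size would indicate a reliably larger gain in the trained group; the study reports exactly such a significant difference (against a dedicated statistics package, this sketch omits the p-value computation).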
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chess" title="chess">chess</a>, <a href="https://publications.waset.org/abstracts/search?q=cognition" title=" cognition"> cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligence" title=" intelligence"> intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20reasoning" title=" perceptual reasoning"> perceptual reasoning</a> </p> <a href="https://publications.waset.org/abstracts/71492/improving-perceptual-reasoning-in-school-children-through-chess-training" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71492.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13510</span> Research on Perceptual Features of Couchsurfers on New Hospitality Tourism Platform Couchsurfing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuanxiang%20Miao">Yuanxiang Miao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to examine the perceptual features of couchsurfers on a new hospitality tourism platform, the free homestay website Couchsurfing. As a local host, the author has hosted 61 couchsurfers in Kyoto, Japan, and sought to identify couchsurfers' perceptual characteristics through these stays. The methodology of this research is mainly based on in-depth interviews with couchsurfers, complemented by observation of their behavior and questionnaires.
Five dominant perceptual features of couchsurfers were identified: (1) Trusting; (2) Meeting; (3) Sharing; (4) Reciprocity; (5) Worries. The value of this research lies in providing a deeper understanding of the perceptual features of couchsurfers; the author indeed hosted and stayed with 61 couchsurfers from 30 countries and areas over one year. Lastly, the author offers practical suggestions for future research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=couchsurfing" title="couchsurfing">couchsurfing</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20interview" title=" depth interview"> depth interview</a>, <a href="https://publications.waset.org/abstracts/search?q=hospitality%20tourism" title=" hospitality tourism"> hospitality tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20features" title=" perceptual features"> perceptual features</a> </p> <a href="https://publications.waset.org/abstracts/125558/research-on-perceptual-features-of-couchsurfers-on-new-hospitality-tourism-platform-couchsurfing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13509</span> To Estimate the Association between Visual Stress and Visual Perceptual Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Reena%20Durai">Vijay Reena Durai</a>, <a href="https://publications.waset.org/abstracts/search?q=Krithica%20Srinivasan"> Krithica Srinivasan</a> </p> <p
class="card-text"><strong>Abstract:</strong></p> Introduction: The two fundamental skills involved in the growth and wellbeing of any child can be categorized into visual motor and perceptual skills. Visual stress is a disorder characterized by visual discomfort, blurred vision, misspelling of words, skipping of lines, and letters appearing to bunch together. There is a need to understand the deficits in perceptual skills among children with visual stress. Aim: To estimate the association between visual stress and visual perceptual skills. Objective: To compare the visual perceptual skills of children with and without visual stress. Methodology: Children between 8 and 15 years of age participated in this cross-sectional study. All children with monocular visual acuity better than or equal to 6/6 were included. Visual perceptual skills were measured using the Test of Visual Perceptual Skills (TVPS) tool. Reading speed was measured with the chosen colored overlay using the Wilkins reading chart, and the pattern glare score was estimated using a 3 cpd grating. Visual stress was defined as a change in reading speed of greater than or equal to 10% and a pattern glare score of greater than or equal to 4. Results: 252 children participated in this study, with a male:female ratio of 3:2. The majority of the children preferred a magenta (28%) or yellow (25%) colored overlay for reading. There was a significant difference between the two groups only in sequential memory skills (MD = 1.24 ± 0.6; p < 0.04; 95% CI 0.01–2.43). The prevalence of visual stress in this group was found to be 31% (n=78). Binary logistic regression showed that the odds ratio for poor visual perceptual skills among children with visual stress was 2.85 (95% CI 1.08–7.49). Conclusion: Children with visual stress were found to have nearly three times higher odds of poor visual perceptual skills than children without visual stress.
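The reported association rests on a standard 2×2 odds-ratio computation, which can be sketched as follows. The counts below are hypothetical, chosen only to be consistent with the reported prevalence (78 of 252 children with visual stress); they are not the study's actual cross-tabulation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 78 children with visual stress, 174 without.
# a = visual stress & poor perceptual skills, b = visual stress & normal skills, etc.
or_, lo, hi = odds_ratio_ci(30, 48, 31, 143)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An odds ratio near 2.85 with a confidence interval excluding 1, as reported, indicates that visual stress is associated with markedly higher odds of poor perceptual skills.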
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20stress" title="visual stress">visual stress</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=colored%20overlay" title=" colored overlay"> colored overlay</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20glare" title=" pattern glare"> pattern glare</a> </p> <a href="https://publications.waset.org/abstracts/41580/to-estimate-the-association-between-visual-stress-and-visual-perceptual-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13508</span> 3D Printing Perceptual Models of Preference Using a Fuzzy Extreme Learning Machine Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xinyi%20Le">Xinyi Le</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, 3D printing orientations were determined through our perceptual model. Some FDM (Fused Deposition Modeling) 3D printers, which are widely used in universities and industries, often require support structures during the additive manufacturing. After removing the residual material, some surface artifacts remain at the contact points. These artifacts will damage the function and visual effect of the model. 
To prevent the impact of these artifacts, we present a fuzzy extreme learning machine approach to find printing directions that avoid placing supports in perceptually significant regions. The proposed approach is able to solve the evaluation problem by combining both subjective knowledge and objective information. Our method combines the advantages of fuzzy theory, auto-encoders, and extreme learning machines. Fuzzy set theory is applied for dealing with subjective preference information, and an auto-encoder step is used to extract informative features without supervised labels before the extreme learning machine. An extreme learning machine is then successfully trained to learn perceptual models. The performance of this perceptual model is demonstrated on both natural and man-made objects. This is good human-computer interaction practice, drawing on supporting knowledge from both the machine side and the human side. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3d%20printing" title="3d printing">3d printing</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20model" title=" perceptual model"> perceptual model</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20evaluation" title=" fuzzy evaluation"> fuzzy evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=data-driven%20approach" title=" data-driven approach"> data-driven approach</a> </p> <a href="https://publications.waset.org/abstracts/67233/3d-printing-perceptual-models-of-preference-using-a-fuzzy-extreme-learning-machine-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67233.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">438</span> </span> </div> </div> <div class="card paper-listing mb-3
mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13507</span> Auteur 3D Filmmaking: From Hitchcock’s Protrusion Technique to Godard’s Immersion Aesthetic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Delia%20Enyedi">Delia Enyedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Throughout film history, the regular return of 3D cinema has been discussed in connection with crises caused by the advent of television or competition from the Internet. In addition, the three waves of stereoscopic 3D (from 1952 up to 1983) and its current digital version have been blamed for adding a challenging technical distraction to the viewing experience. By discussing the films <em>Dial M for Murder</em> (1954) and <em>Goodbye to Language</em> (2014), the paper aims to analyze the response of recognized auteurs to the use of 3D techniques in filmmaking. For Alfred Hitchcock, the solution to attaining perceptual immersion paradoxically resided in restraining the signature effect of 3D, namely protrusion. In Jean-Luc Godard&rsquo;s vision, 3D techniques allowed him to explore perceptual absorption by means of depth of field, which he had long advocated as central to cinema. Thus, both directors contribute to the foundation of an auteur aesthetic in 3D filmmaking.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alfred%20Hitchcock" title="Alfred Hitchcock">Alfred Hitchcock</a>, <a href="https://publications.waset.org/abstracts/search?q=authorship" title=" authorship"> authorship</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20filmmaking" title=" 3D filmmaking"> 3D filmmaking</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Luc%20Godard" title=" Jean-Luc Godard"> Jean-Luc Godard</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20absorption" title=" perceptual absorption"> perceptual absorption</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20immersion" title=" perceptual immersion"> perceptual immersion</a> </p> <a href="https://publications.waset.org/abstracts/61084/auteur-3d-filmmaking-from-hitchcocks-protrusion-technique-to-godards-immersion-aesthetic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61084.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13506</span> The Interleaving Effect of Subject Matter and Perceptual Modality on Students’ Attention and Learning: A Portable EEG Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wen%20Chen">Wen Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To investigate the interleaving effect of subject matter (mathematics vs. history) and perceptual modality (visual vs. 
auditory materials) on student’s attention and learning outcomes, the present study collected self-reported data on subjective cognitive load (SCL) and attention level, EEG data, and learning outcomes from micro-lectures. Eighty-one 7th grade students were randomly assigned to four learning conditions: blocked (by subject matter) micro-lectures with auditory textual information (B-A condition), blocked (by subject matter) micro-lectures with visual textual information (B-V condition), interleaved (by subject matter) micro-lectures with auditory textual information (I-A condition), and interleaved micro-lectures by both perceptual modality and subject matter (I-all condition). The results showed that although interleaved conditions may show advantages in certain indices, the I-all condition showed the best overall outcomes (best performance, low SCL, and high attention). This study suggests that interleaving by both subject matter and perceptual modality should be preferred in scheduling and planning classes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20load" title="cognitive load">cognitive load</a>, <a href="https://publications.waset.org/abstracts/search?q=interleaving%20effect" title=" interleaving effect"> interleaving effect</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-lectures" title=" micro-lectures"> micro-lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=sustained%20attention" title=" sustained attention"> sustained attention</a> </p> <a href="https://publications.waset.org/abstracts/105780/the-interleaving-effect-of-subject-matter-and-perceptual-modality-on-students-attention-and-learning-a-portable-eeg-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105780.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13505</span> Perceptual and Ultrasound Articulatory Training Effects on English L2 Vowels Production by Italian Learners </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=I.%20Sonia%20d%E2%80%99Apolito">I. 
Sonia d’Apolito</a>, <a href="https://publications.waset.org/abstracts/search?q=Bianca%20Sisinni"> Bianca Sisinni</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirko%20Grimaldi"> Mirko Grimaldi</a>, <a href="https://publications.waset.org/abstracts/search?q=Barbara%20Gili%20Fivela"> Barbara Gili Fivela</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The American English contrast /ɑ-ʌ/ (cop-cup) is difficult for Italian learners to produce, since they realize L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively, due to differences in phonetic-phonological systems and also in grapheme-to-phoneme conversion rules. In this paper, we try to answer the following research questions: Can a short training improve the production of English /ɑ-ʌ/ by Italian learners? Is a perceptual training better than an articulatory (ultrasound - US) training? Thus, we compare a perceptual training with a US articulatory one to observe: 1) the effects of short trainings on L2-/ɑ-ʌ/ productions; 2) whether the US articulatory training improves the pronunciation better than the perceptual training. In this pilot study, 9 Salento-Italian monolingual adults participated: 3 subjects performed a 1-hour perceptual training (ES-P); 3 subjects performed a 1-hour US training (ES-US); and 3 control subjects did not receive any training (CS). Verbal instructions about the phonetic properties of L2-/ɑ-ʌ/ and L1-/ɔ-a/ and their differences (representation on the F1-F2 plane) were provided during both trainings. After these instructions, the ES-P group performed an identification training based on the High Variability Phonetic Training procedure, while the ES-US group performed the articulatory training, by means of US videos of tongue gestures in L2-/ɑ-ʌ/ production and a dynamic view of their own tongue movements and position via a probe under their chin. The acoustic data were analyzed and the first three formants were calculated. Independent t-tests were run to compare: 1) /ɑ-ʌ/ in the pre- vs. post-test respectively; 2) /ɑ-ʌ/ in the pre- and post-test vs. L1-/a-ɔ/ respectively. Results show that in the pre-test all speakers realized L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively. Contrary to the CS and ES-P groups, the ES-US group in the post-test differentiated the L2 vowels from those produced in the pre-test as well as from the L1 vowels, although only one ES-US subject produced both L2 vowels accurately. The articulatory training seems more effective than the perceptual one, since it shifts productions in the direction of the L2 vowels and away from the similar L1 vowels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=L2%20vowel%20production" title="L2 vowel production">L2 vowel production</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20training" title=" perceptual training"> perceptual training</a>, <a href="https://publications.waset.org/abstracts/search?q=articulatory%20training" title=" articulatory training"> articulatory training</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a> </p> <a href="https://publications.waset.org/abstracts/71568/perceptual-and-ultrasound-articulatory-training-effects-on-english-l2-vowels-production-by-italian-learners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71568.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13504</span> Perceptual Learning with Hand-Eye Coordination as an Effective Tool for Managing Amblyopia: A Prospective Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Anandkumar%20S.%20Purohit">Anandkumar S. Purohit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Amblyopia is a serious condition resulting in monocular impairment of vision. Although traditional treatment improves vision, we evaluated the outcomes of perceptual learning in this study. Methods: The prospective cohort study included all patients with amblyopia who underwent perceptual learning. The presenting data on vision, stereopsis, and contrast sensitivity were documented in a pretested online format, and the pre‑ and post‑treatment information was compared using descriptive, cross‑tabulation, and comparative methods on SPSS 22. Results: The cohort consisted of 47 patients (23 females and 24 males) with a mean age of 14.11 ± 7.13 years. A significant improvement was detected in visual acuity after the PL sessions, and the median follow‑up period was 17 days. Stereopsis improved significantly in all age groups. Conclusion: PL with hand-eye coordination is an effective method for managing amblyopia. This approach can improve vision in all age groups.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=amblyopia" title="amblyopia">amblyopia</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20learning" title=" perceptual learning"> perceptual learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand-eye%20coordination" title=" hand-eye coordination"> hand-eye coordination</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20acuity" title=" visual acuity"> visual acuity</a>, <a href="https://publications.waset.org/abstracts/search?q=stereopsis" title=" stereopsis"> stereopsis</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20sensitivity" title=" contrast sensitivity"> contrast sensitivity</a>, <a href="https://publications.waset.org/abstracts/search?q=ophthalmology" title=" ophthalmology"> ophthalmology</a> </p> <a href="https://publications.waset.org/abstracts/190032/perceptual-learning-with-hand-eye-coordination-as-an-effective-tool-for-managing-amblyopia-a-prospective-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190032.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">25</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13503</span> The Combination of the Mel Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), JITTER and SHIMMER Coefficients for the Improvement of Automatic Recognition System for Dysarthric Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brahim-Fares%20Zaidi">Brahim-Fares Zaidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Malika%20Boudraa"> Malika 
Boudraa</a>, <a href="https://publications.waset.org/abstracts/search?q=Sid-Ahmed%20Selouani"> Sid-Ahmed Selouani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our work aims to improve our Automatic Recognition System for Dysarthric Speech (ARSDS), based on Hidden Markov Models (HMM) and the Hidden Markov Model Toolkit (HTK), to help people with pronunciation problems. We applied two speech-parameterization techniques based on Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP) and concatenated them with JITTER and SHIMMER coefficients in order to increase the recognition rate for dysarthric speech. For our tests, we used the NEMOURS database, which contains speakers with dysarthria and normal speakers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hidden%20Markov%20model%20toolkit%20%28HTK%29" title="hidden Markov model toolkit (HTK)">hidden Markov model toolkit (HTK)</a>, <a href="https://publications.waset.org/abstracts/search?q=hidden%20models%20of%20Markov%20%28HMM%29" title=" hidden models of Markov (HMM)"> hidden models of Markov (HMM)</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstral%20coefficients%20%28MFCC%29" title=" Mel-frequency cepstral coefficients (MFCC)"> Mel-frequency cepstral coefficients (MFCC)</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20linear%20prediction%20%28PLP%E2%80%99s%29" title=" perceptual linear prediction (PLP’s)"> perceptual linear prediction (PLP’s)</a> </p> <a href="https://publications.waset.org/abstracts/143303/the-combination-of-the-mel-frequency-cepstral-coefficients-mfcc-perceptual-linear-prediction-plp-jitter-and-shimmer-coefficients-for-the-improvement-of-automatic-recognition-system-for-dysarthric-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143303.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13502</span> Perception of Greek Vowels by Arabic-Greek Bilinguals: An Experimental Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Georgios%20P.%20Georgiou">Georgios P. Georgiou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Infants are able to discriminate a number of sound contrasts in most languages. However, this ability is not available in adults, who might face difficulties in accurately discriminating second language sound contrasts, as they filter second language speech through the phonological categories of their native language. For example, Spanish speakers often struggle to perceive the difference between the English /ε/ and /æ/ because neither vowel exists in their native language; so they assimilate these vowels to the closest phonological category of their first language. The present study aims to uncover the perceptual patterns of Arabic adult speakers in regard to the vowels of their second language (Greek). To date, no study has investigated the perception of Greek vowels by Arabic speakers; thus, the present study contributes to the enrichment of the literature with cross-linguistic research in new languages. For the purposes of the present study, 15 native speakers of Egyptian Arabic who permanently live in Cyprus and have adequate knowledge of Greek as a second language completed vowel assimilation and vowel contrast discrimination (AXB) tests in their second language. The perceptual stimuli included nonsense words that contained vowels in both stressed and unstressed positions. 
The second language listeners’ patterns were analyzed through the Perceptual Assimilation Model, which makes testable hypotheses about the assimilation of second language sounds to the speakers’ native phonological categories and the discrimination accuracy over second language sound contrasts. The results indicated that second language listeners assimilated pairs of Greek vowels into a single phonological category of their native language, resulting in a Category Goodness difference assimilation type for the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ vowel contrasts. On the contrary, the members of the Greek unstressed /i/-/e/ vowel contrast were assimilated to two different categories, resulting in a Two Category assimilation type. Furthermore, they could discriminate the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ contrasts only to a moderate degree, while the Greek unstressed /i/-/e/ contrast could be discriminated to an excellent degree. Two main implications emerge from the results. First, there is a strong influence of the listeners’ native language on the perception of the second language vowels. In Egyptian Arabic, contiguous vowel categories such as [i]-[e] and [u]-[o] are not phonemically distinct but are subject to allophonic variation; by contrast, the vowel contrasts /i/-/e/ and /o/-/u/ are phonemic in Greek. Second, the role of stress is significant for second language perception since stressed vs. unstressed vowel contrasts were perceived in a different manner by the listeners. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabic" title="Arabic">Arabic</a>, <a href="https://publications.waset.org/abstracts/search?q=bilingual" title=" bilingual"> bilingual</a>, <a href="https://publications.waset.org/abstracts/search?q=Greek" title=" Greek"> Greek</a>, <a href="https://publications.waset.org/abstracts/search?q=vowel%20perception" title=" vowel perception"> vowel perception</a> </p> <a href="https://publications.waset.org/abstracts/102621/perception-of-greek-vowels-by-arabic-greek-bilinguals-an-experimental-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13501</span> SIFT and Perceptual Zoning Applied to CBIR Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Simone%20B.%20K.%20Aires">Simone B. K. Aires</a>, <a href="https://publications.waset.org/abstracts/search?q=Cinthia%20O.%20de%20A.%20Freitas"> Cinthia O. de A. Freitas</a>, <a href="https://publications.waset.org/abstracts/search?q=Luiz%20E.%20S.%20Oliveira"> Luiz E. S. Oliveira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper contributes to CBIR systems applied to trademark retrieval. The proposed model incorporates aspects of the visual perception of shapes by means of a feature extractor associated with a non-symmetrical perceptual zoning mechanism based on the principles of Gestalt. The features were extracted using the Scale Invariant Feature Transform (SIFT). 
We carried out experiments using four different zoning strategies (Z = 4, 5H, 5V, 7) for matching and retrieval tasks. Our proposed method achieved a normalized recall (Rn) of 0.84. The experiments show that non-symmetrical zoning can be considered a tool for building more reliable trademark retrieval systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CBIR" title="CBIR">CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=Gestalt" title=" Gestalt"> Gestalt</a>, <a href="https://publications.waset.org/abstracts/search?q=matching" title=" matching"> matching</a>, <a href="https://publications.waset.org/abstracts/search?q=non-symmetrical%20zoning" title=" non-symmetrical zoning"> non-symmetrical zoning</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a> </p> <a href="https://publications.waset.org/abstracts/15764/sift-and-perceptual-zoning-applied-to-cbir-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13500</span> The Combination of the Mel Frequency Cepstral Coefficients, Perceptual Linear Prediction, Jitter and Shimmer Coefficients for the Improvement of Automatic Recognition System for Dysarthric Speech</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brahim%20Fares%20Zaidi">Brahim Fares Zaidi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our work aims to improve our Automatic Recognition System for Dysarthric Speech, based on Hidden Markov Models and 
the Hidden Markov Model Toolkit, to help people with pronunciation problems. We applied two speech-parameterization techniques based on Mel Frequency Cepstral Coefficients and Perceptual Linear Prediction and concatenated them with JITTER and SHIMMER coefficients in order to increase the recognition rate for dysarthric speech. For our tests, we used the NEMOURS database, which contains speakers with dysarthria and normal speakers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ARSDS" title="ARSDS">ARSDS</a>, <a href="https://publications.waset.org/abstracts/search?q=HTK" title=" HTK"> HTK</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM" title=" HMM"> HMM</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=PLP" title=" PLP"> PLP</a> </p> <a href="https://publications.waset.org/abstracts/158636/the-combination-of-the-mel-frequency-cepstral-coefficients-perceptual-linear-prediction-jitter-and-shimmer-coefficients-for-the-improvement-of-automatic-recognition-system-for-dysarthric-speech" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158636.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13499</span> Subjective Time as a Marker of the Present Consciousness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anastasiya%20Paltarzhitskaya">Anastasiya Paltarzhitskaya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Subjective time plays an important role in consciousness 
processes and self-awareness in the present moment. The concept of intrinsic neural timescales (INT) explains differences in the perception of various time intervals. The capacity to experience the present builds on the fundamental properties of temporal cognition. The challenge that both philosophy and neuroscience try to answer is how the brain differentiates the present from the past and future. In our work, we analyze papers that describe mechanisms involved in the perception of the ‘present’ and ‘non-present’, i.e., future and past moments. Taking into account that we perceive time intervals even during rest or relaxation, we suppose that default-mode network activity can code time features, including the present moment. We can compare some results of time perception studies in which brain activity was shown in states with different flows of time, including resting states and “mental time travel”. According to the concept of mental time travel, we employ a range of scenarios which demand episodic memory. However, some papers show that the hippocampal region does not activate during mental time travel. This controversial result is further complicated by the phenomenological aspect, which includes a holistic set of information about the individual’s past and future. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20consciousness" title="temporal consciousness">temporal consciousness</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20perception" title=" time perception"> time perception</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=present" title=" present"> present</a> </p> <a href="https://publications.waset.org/abstracts/144726/subjective-time-as-a-marker-of-the-present-consciousness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144726.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13498</span> Correlation between Visual Perception and Social Function in Patients with Schizophrenia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Candy%20Chieh%20Lee">Candy Chieh Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: The purpose of this study is to investigate the relationship between visual perception and social function in patients with schizophrenia. The specific aims are: 1) To explore performances in visual perception and social function in patients with schizophrenia 2) to examine the correlation between visual perceptual skills and social function in patients with schizophrenia The long-term goal is to be able to provide the most adequate intervention program for promoting patients’ visual perceptual skills and social function, as well as compensatory techniques. 
Background: Perceptual deficits in schizophrenia have been well documented in the visual system. Clinically, a considerable portion (up to 60%) of schizophrenia patients report distorted visual experiences in the perception of motion, color, size, and facial expression. Visual perception is required for the successful performance of most activities of daily living, such as dressing, making a cup of tea, driving a car, and reading. On the other hand, patients with schizophrenia usually exhibit psychotic symptoms such as auditory hallucinations and delusions, which tend to alter their perception of reality, affect the quality of their interpersonal relationships, and limit their participation in various social situations. Social function plays an important role in the prognosis of patients with schizophrenia; lower social functioning skills can lead to a poorer prognosis. Investigations of the relationship between social functioning and perceptual ability in patients with schizophrenia are relatively new but important, as the results could inform effective interventions on visual perception and social functioning. Methods: We recruited 50 participants with schizophrenia from the acute ward of a mental health hospital (Taipei City Hospital, Songde branch, Taipei, Taiwan). Participants who signed consent forms, had a diagnosis of schizophrenia, and had no organic vision deficits were included. Participants were administered the Test of Visual-Perceptual Skills (non-motor), third edition (TVPS-3), and the Personal and Social Performance Scale (PSP) to assess visual perceptual skills and social function. The assessments took about 70-90 minutes to complete. Data Analysis: IBM SPSS 21.0 was used to perform the statistical analysis. First, descriptive statistics were computed to describe the characteristics and performance of the participants. 
Lastly, Pearson correlations were computed to examine the relationship between PSP and TVPS-3 scores. Results: Significant differences were found between participants’ mean raw scores on each TVPS-3 subtest and the age-equivalent raw scores provided by the TVPS-3 manual. Significant correlations were found between all 7 subtests of the TVPS-3 and the PSP total score. Conclusions: The results showed that patients with schizophrenia do exhibit visual perceptual deficits and that these deficits are correlated with social function. Understanding these characteristics of patients with schizophrenia can assist health care professionals in designing and implementing adequate rehabilitative treatment according to patients’ needs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=occupational%20therapy" title="occupational therapy">occupational therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20function" title=" social function"> social function</a>, <a href="https://publications.waset.org/abstracts/search?q=schizophrenia" title=" schizophrenia"> schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/102067/correlation-between-visual-perception-and-social-function-in-patients-with-schizophrenia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102067.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13497</span> A Blind Three-Dimensional Meshes Watermarking Using the Interquartile Range</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Emad%20E.%20Abdallah">Emad E. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Alaa%20E.%20Abdallah"> Alaa E. Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Bajes%20Y.%20Alskarnah"> Bajes Y. Alskarnah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, or spread, of the 3D model vertices. The algorithm starts by converting all the vertices to spherical coordinates, followed by partitioning them into small groups. The proposed algorithm slightly alters the interquartile range distribution of the small groups based on a predefined watermark. The experimental results on several 3D meshes demonstrate the perceptual invisibility and robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling, and rotation, as well as combinations of these attacks. 
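The pipeline this abstract describes (convert vertices to spherical coordinates, partition them into groups, nudge each group's interquartile range to encode a watermark bit) can be illustrated roughly as follows. The grouping rule, embedding strength, and bit encoding below are illustrative assumptions, not the authors' exact scheme:

```python
import math
from statistics import quantiles

def to_spherical(vertices):
    """Convert (x, y, z) vertices to (r, theta, phi) spherical coordinates."""
    out = []
    for x, y, z in vertices:
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(z / r) if r else 0.0  # polar angle
        phi = math.atan2(y, x)                  # azimuth
        out.append((r, theta, phi))
    return out

def embed_bit(radii, bit, strength=0.02):
    """Encode one watermark bit by nudging a vertex group's radial IQR.

    Scaling each radius's distance from the group median widens (bit=1)
    or narrows (bit=0) the spread, shifting the interquartile range in
    the chosen direction while barely moving any single vertex.
    """
    _, median, _ = quantiles(radii, n=4)  # n=4 yields [Q1, median, Q3]
    factor = 1 + strength if bit else 1 - strength
    return [median + (r - median) * factor for r in radii]
```

A detector would recompute each group's IQR and compare it against a reference to read the bits back out; keeping `strength` small is what preserves the perceptual invisibility the abstract reports.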
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=watermarking" title="watermarking">watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=three-dimensional%20models" title=" three-dimensional models"> three-dimensional models</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20invisibility" title=" perceptual invisibility"> perceptual invisibility</a>, <a href="https://publications.waset.org/abstracts/search?q=interquartile%20range" title=" interquartile range"> interquartile range</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20attacks" title=" 3D attacks"> 3D attacks</a> </p> <a href="https://publications.waset.org/abstracts/15946/a-blind-three-dimensional-meshes-watermarking-using-the-interquartile-range" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15946.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13496</span> Comparing the Effect of Virtual Reality and Sound on Landscape Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mark%20Lindquist">Mark Lindquist</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents preliminary results of exploratory empirical research investigating the effect of viewing 3D landscape visualizations in virtual reality compared to a computer monitor, and how sound impacts perception. Five landscape types were paired with three sound conditions (no sound, generic sound, realistic sound). 
Perceived realism, preference, recreational value, and biodiversity were evaluated in a controlled laboratory environment. Results indicate that sound has a larger perceptual impact than display mode, regardless of sound source, across all perceptual measures. The results are discussed in terms of how sound can impact landscape preference and spatiotemporal understanding. The paper concludes with a discussion of the implications for designers, planners, and the public, and identifies future research directions in this area. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=landscape%20experience" title="landscape experience">landscape experience</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=soundscape" title=" soundscape"> soundscape</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a> </p> <a href="https://publications.waset.org/abstracts/114889/comparing-the-effect-of-virtual-reality-and-sound-on-landscape-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13495</span> Nature of Body Image Distortion in Eating Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katri%20K.%20Cornelissen">Katri K. 
Cornelissen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lise%20Gulli%20Brokjob"> Lise Gulli Brokjob</a>, <a href="https://publications.waset.org/abstracts/search?q=Kristofor%20McCarty"> Kristofor McCarty</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiri%20Gumancik"> Jiri Gumancik</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20J.%20Tovee"> Martin J. Tovee</a>, <a href="https://publications.waset.org/abstracts/search?q=Piers%20L.%20Cornelissen"> Piers L. Cornelissen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent research has shown that body size estimation of healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled to low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size, independent of the Body Mass Index (BMI) someone actually has. The perceptual component is a normal bias known as contraction bias, which, for bodies, is dependent on actual BMI. Women with a BMI less than the population norm tend to overestimate their size, while women with a BMI greater than the population norm tend to underestimate their size. Women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope less than one. It is well established that body dissatisfaction, i.e. an attitudinal distortion, leads to body size overestimation in eating disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. Therefore, the current study set out to ask whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. 
If they do not, this would suggest differences in the way that women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. 100 healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups of participants represented comparable and adequate ranges of actual BMI (e.g. ~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 bulimia nervosa, and 8 OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% of them self-reported that they had had one or more instances of relapse. The mean time lapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants were asked to complete a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method-of-adjustment psychophysical task, using photorealistic avatars calibrated for BMI, in order to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e. contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI ranges was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of the controls. This suggests that (some) eating disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders. 
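The slope analysis used here, regressing estimated BMI on actual BMI and reading contraction bias off a slope below one, is a plain least-squares fit. A minimal sketch with made-up illustrative numbers (not the study's data):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical control-like data: estimates are pulled 40% of the way
# toward an assumed population-mean BMI of 28, so low-BMI individuals
# overestimate and high-BMI individuals underestimate.
actual = [18, 22, 26, 30, 34, 38]
estimated = [a + 0.4 * (28 - a) for a in actual]

assert ols_slope(actual, estimated) < 1  # slope < 1: contraction bias
```

For the eating disorder group the abstract reports the opposite pattern, a fitted slope greater than 1, i.e. estimates pushed away from the mean rather than pulled toward it.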
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=body%20image%20distortion" title="body image distortion">body image distortion</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=recovery" title=" recovery"> recovery</a>, <a href="https://publications.waset.org/abstracts/search?q=relapse" title=" relapse"> relapse</a>, <a href="https://publications.waset.org/abstracts/search?q=BMI" title=" BMI"> BMI</a>, <a href="https://publications.waset.org/abstracts/search?q=eating%20disorders" title=" eating disorders"> eating disorders</a> </p> <a href="https://publications.waset.org/abstracts/171930/nature-of-body-image-distortion-in-eating-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13494</span> Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Min%20L.%20Stewart">Min L. Stewart</a>, <a href="https://publications.waset.org/abstracts/search?q=Patrick%20Johnston"> Patrick Johnston</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. 
As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category), in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes in expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high- and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism. 
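Fast periodic visual stimulation analyses of this kind quantify the brain response at the stimulation frequencies themselves. A toy frequency-tagging sketch on a synthetic signal (the sampling rate and the 6 Hz base / 1.2 Hz oddball rates are illustrative assumptions, not the study's parameters):

```python
import math

def amplitude_at(signal, freq, srate):
    """Fourier amplitude at a single frequency (a one-bin DFT)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / srate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / srate) for i, s in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / n

srate = 250                    # Hz, assumed EEG sampling rate
base_f, oddball_f = 6.0, 1.2   # e.g. faces at 6 Hz, an oddball every 5th face
t = [i / srate for i in range(srate * 10)]  # 10 s of synthetic "EEG"
eeg = [math.sin(2 * math.pi * base_f * x)
       + 0.3 * math.sin(2 * math.pi * oddball_f * x) for x in t]

# The tagged frequencies stand out against an unstimulated frequency.
assert amplitude_at(eeg, oddball_f, srate) > amplitude_at(eeg, 3.7, srate)
```

In a real analysis, the oddball response at the periodic-change frequency and its harmonics, compared between high- and low-AT groups, would index sensitivity to the violated expectation.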
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hierarchical%20visual%20processing" title="hierarchical visual processing">hierarchical visual processing</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20processing" title=" face processing"> face processing</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20hierarchy" title=" perceptual hierarchy"> perceptual hierarchy</a>, <a href="https://publications.waset.org/abstracts/search?q=prediction%20error" title=" prediction error"> prediction error</a>, <a href="https://publications.waset.org/abstracts/search?q=predictive%20coding" title=" predictive coding"> predictive coding</a> </p> <a href="https://publications.waset.org/abstracts/107770/examining-predictive-coding-in-the-hierarchy-of-visual-perception-in-the-autism-spectrum-using-fast-periodic-visual-stimulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107770.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">111</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13493</span> Hydration Matters: Impact on 3 km Running Performance in Trained Male Athletes Under Heat Conditions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhaoqi%20He">Zhaoqi He</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research Context: Endurance performance in hot environments is influenced by the interplay of hydration status and physiological responses. 
This study aims to investigate how dehydration, up to 2.11% body weight loss, affects the 3 km running performance of trained male athletes under conditions mimicking high temperatures. Methodology: In a randomized crossover design, five male athletes participated in two trials – euhydrated (EU) and dehydrated (HYPO). Both trials included a 70-minute preload run at 55-60% VO2max in 32°C and 50% humidity, followed by a 3 km time trial. Fluid intake was restricted in HYPO to induce a 2.11% body weight loss. Physiological metrics, including heart rate, core temperature, and oxygen uptake, were measured, along with perceptual metrics such as perceived exertion and thirst sensation. Findings: The 3 km run completion times showed no significant differences between EU and HYPO trials (p=0.944). Physiological indicators, including heart rate, core temperature, and oxygen uptake, did not significantly vary (p>0.05). Thirst sensation was markedly higher in HYPO (p=0.013), confirming successful induction of dehydration. Other perceptual metrics and gastrointestinal comfort remained consistent. Conclusion: Contrary to the hypothesis, the study reveals that dehydration, inducing up to 2.11% body weight loss, does not significantly impair 3 km running performance in trained male athletes under hot conditions. Thirst sensation was notably higher in the dehydrated state, emphasizing the importance of considering perceptual factors in hydration strategies. The findings suggest that trained runners can maintain performance despite moderate dehydration, highlighting the need for nuanced hydration guidelines in hot-weather running. 
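The hypohydration level quoted above is expressed as a percentage of pre-exercise body mass. A minimal sketch of that calculation; the weights used here are illustrative values, not the study's data:

```python
def percent_body_mass_loss(pre_kg: float, post_kg: float) -> float:
    """Hypohydration level as a percentage of pre-exercise body mass."""
    return (pre_kg - post_kg) / pre_kg * 100.0

# Illustrative values only: a 70.0 kg runner finishing at 68.5 kg
loss = percent_body_mass_loss(70.0, 68.5)
print(round(loss, 2))  # 2.14
```

A loss of roughly 2% body mass, as in the HYPO trial, is the conventional threshold above which hydration guidelines typically predict performance decrements.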
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hypohydration" title="hypohydration">hypohydration</a>, <a href="https://publications.waset.org/abstracts/search?q=euhydration" title=" euhydration"> euhydration</a>, <a href="https://publications.waset.org/abstracts/search?q=hot%20environment" title=" hot environment"> hot environment</a>, <a href="https://publications.waset.org/abstracts/search?q=3km%20running%20time%20trial" title=" 3km running time trial"> 3km running time trial</a>, <a href="https://publications.waset.org/abstracts/search?q=endurance%20performance" title=" endurance performance"> endurance performance</a>, <a href="https://publications.waset.org/abstracts/search?q=trained%20athletes" title=" trained athletes"> trained athletes</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20metrics" title=" perceptual metrics"> perceptual metrics</a>, <a href="https://publications.waset.org/abstracts/search?q=dehydration%20impact" title=" dehydration impact"> dehydration impact</a>, <a href="https://publications.waset.org/abstracts/search?q=physiological%20responses" title=" physiological responses"> physiological responses</a>, <a href="https://publications.waset.org/abstracts/search?q=hydration%20strategies" title=" hydration strategies"> hydration strategies</a> </p> <a href="https://publications.waset.org/abstracts/182418/hydration-matters-impact-on-3-km-running-performance-in-trained-male-athletes-under-heat-conditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182418.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13492</span> Perceptual Image Coding by 
Exploiting Internal Generative Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kuo-Cheng%20Liu">Kuo-Cheng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In perceptual image coding, the objective is to shape the coding distortion so that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. The internal generative mechanism working model based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. The estimation method of structure-based distortion visibility thresholds for color components is further presented in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest subband coefficient matrix of images in the wavelet domain preserves the local property of images in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated by using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. 
Since the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using the existing color image compression method at perceptually lossless visual quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=internal%20generative%20mechanism" title="internal generative mechanism">internal generative mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=structure-based%20spatial%20masking" title=" structure-based spatial masking"> structure-based spatial masking</a>, <a href="https://publications.waset.org/abstracts/search?q=visibility%20threshold" title=" visibility threshold"> visibility threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20domain" title=" wavelet domain"> wavelet domain</a> </p> <a href="https://publications.waset.org/abstracts/75216/perceptual-image-coding-by-exploiting-internal-generative-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75216.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13491</span> Studying the Spatial Aspects of Visual Attention Processing in Global Precedence Paradigm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreya%20Borthakur">Shreya Borthakur</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Aastha%20Vartak"> Aastha Vartak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This behavioral experiment aimed to investigate the global precedence phenomenon in a South Asian sample and its correlation with mobile screen time. The global precedence effect refers to the tendency to process overall structure before attending to specific details. Participants completed attention tasks involving global and local stimuli with varying consistencies. The results showed a tendency towards local precedence, but no significant differences in reaction times were found between consistency levels or attention conditions. However, the correlation analysis revealed that participants with higher screen time showed poorer local attention (a negative correlation), suggesting that excessive screen usage may affect perceptual organization. Further research is needed to explore this relationship and understand the influence of screen time on cognitive processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=global%20precedence" title="global precedence">global precedence</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20attention" title=" visual attention"> visual attention</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20organization" title=" perceptual organization"> perceptual organization</a>, <a href="https://publications.waset.org/abstracts/search?q=screen%20time" title=" screen time"> screen time</a>, <a href="https://publications.waset.org/abstracts/search?q=cognition" title=" cognition"> cognition</a> </p> <a href="https://publications.waset.org/abstracts/169660/studying-the-spatial-aspects-of-visual-attention-processing-in-global-precedence-paradigm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13490</span> Scheduling Algorithm Based on Load-Aware Queue Partitioning in Heterogeneous Multi-Core Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hong%20Kai">Hong Kai</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhong%20Jun%20Jie"> Zhong Jun Jie</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Lin%20Qi"> Chen Lin Qi</a>, <a href="https://publications.waset.org/abstracts/search?q=Wang%20Chen%20Guang"> Wang Chen Guang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are inefficient global scheduling parallelism and local scheduling parallelism prone to processor starvation in current 
scheduling algorithms. To address this issue, this paper proposes a load-aware queue partitioning scheduling strategy: queues are first allocated according to the number of processor cores, a load factor is calculated to specify each load queue's capacity, and waiting nodes are then assigned to the appropriate perceptual queues based on their predecessor nodes and the communication and computation overhead. At the same time, real-time computation of the load factor effectively prevents prolonged processor starvation. Experimental comparison with two classical algorithms shows improvements in both performance metrics: scheduling length and task speedup ratio. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=load-aware" title="load-aware">load-aware</a>, <a href="https://publications.waset.org/abstracts/search?q=scheduling%20algorithm" title=" scheduling algorithm"> scheduling algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20queue" title=" perceptual queue"> perceptual queue</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20multi-core" title=" heterogeneous multi-core"> heterogeneous multi-core</a> </p> <a href="https://publications.waset.org/abstracts/162110/scheduling-algorithm-based-on-load-aware-queue-partitioning-in-heterogeneous-multi-core-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162110.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13489</span> The Inattentional Blindness Paradigm: A Breaking Wave for Attentional Biases in Test Anxiety</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kritika%20Kulhari">Kritika Kulhari</a>, <a href="https://publications.waset.org/abstracts/search?q=Aparna%20Sahu"> Aparna Sahu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Test anxiety results from concerns about failure in examinations or evaluative situations. Attentional biases are known to pronounce the symptomatic expression of test anxiety. In recent times, the inattentional blindness (IB) paradigm has shown promise as an attention bias modification treatment (ABMT) for anxiety by overcoming practice and expectancy effects, which preexisting paradigms fail to counter. The IB paradigm assesses the inability of an individual to attend to a stimulus that appears suddenly while the individual is engaged in a perceptual discrimination task. The present study incorporated an IB task with three critical items (book, face, and triangle) appearing randomly in the perceptual discrimination task. Attentional biases were assessed as detection and identification of the critical item. The sample (N = 50) consisted of low test anxiety (LTA) and high test anxiety (HTA) groups based on Reactions to Tests scale scores. Test threat was manipulated, with pre- and post-test assessment of test anxiety using the State Test Anxiety Inventory. A mixed factorial design with gender, test anxiety, presence or absence of test threat, and critical items was used to assess their effects on attentional biases. Results showed only a significant main effect of test anxiety on detection, with higher accuracy of detection of the critical item for the LTA group. The study presents promising results in the realm of ABMT for test anxiety. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attentional%20bias" title="attentional bias">attentional bias</a>, <a href="https://publications.waset.org/abstracts/search?q=attentional%20bias%20modification%20treatment" title=" attentional bias modification treatment"> attentional bias modification treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=inattentional%20blindness" title=" inattentional blindness"> inattentional blindness</a>, <a href="https://publications.waset.org/abstracts/search?q=test%20anxiety" title=" test anxiety"> test anxiety</a> </p> <a href="https://publications.waset.org/abstracts/106231/the-inattentional-blindness-paradigm-a-breaking-wave-for-attentional-biases-in-test-anxiety" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/106231.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">225</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13488</span> Nyaya, Buddhist School Controversy regarding the Laksana of Pratyaksa: Causal versus Conceptual Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maitreyee%20Datta">Maitreyee Datta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Buddhist lakṣaņa of pratyakṣa pramā is not the result of a causal analysis of its genesis. The Naiyāyikas, on the other hand, have provided the lakṣaņa of pratyakṣa in terms of a causal analysis of it. 
Thus, although philosophers in these two systems have discussed the nature of pratyakṣa pramā (perception) in detail, their treatments and understanding of it vary according to their respective understanding of pramā and pramāņa and their relationship. In the Nyāya school, the definition (lakṣaņa) of perception (pratyakṣa) has been given in terms of the process by which it is generated. Thus, the Naiyāyikas provide a causal account of perception (pratyakṣa) by virtue of their lakṣaņa of it. But in Buddhist epistemology, perception has been defined by virtue of the nature of perceptual knowledge (pratyakṣa pramā), which is devoid of any vikalpa or conceptual cognition. These two schools differ owing to their different metaphysical presuppositions, which determine their epistemological pursuits. The Naiyāyikas admitted pramā and pramāņa as separate events, and they took pramāņa to be the cause of pramā. These presuppositions enabled them to provide a lakṣaņa of pratyakṣa pramā in terms of the causes by which it is generated. Why did the Buddhist epistemologists define perception by the unique nature of perceptual knowledge instead of the process by which it is generated? This question will be addressed in the present paper. In doing so, the unique purpose of Buddhist philosophy will be identified, which will enable us to find an answer to the above question. This enterprise will also reveal the close relationship of some basic Buddhist presuppositions, such as pratityasamutpādavāda and kṣaņikavāda, with Buddhist epistemological positions. In other words, their distinctive notion of pramā (knowledge) indicates a unique epistemological position that complies with their basic philosophical presuppositions. The first section of the paper will present the Buddhist epistemologists’ lakṣaņa of pratyakṣa. 
The analysis of the lakṣaņa will be given in clear terms to reveal the nature of pratyakṣa as an instance of pramā. In the second section, an effort will be made to identify the uniqueness of such a definition, articulating how the relationship between basic Buddhist presuppositions and their unique epistemological positions is determined. In the third section of the paper, the Nyāya epistemologists’ position regarding pratyakṣa will be compared with that of the Buddhist epistemologists. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=laksana" title="laksana">laksana</a>, <a href="https://publications.waset.org/abstracts/search?q=prama" title=" prama"> prama</a>, <a href="https://publications.waset.org/abstracts/search?q=pramana" title=" pramana"> pramana</a>, <a href="https://publications.waset.org/abstracts/search?q=pratyksa" title=" pratyksa"> pratyksa</a> </p> <a href="https://publications.waset.org/abstracts/98528/nyaya-buddhist-school-controversy-regarding-the-laksana-of-pratyaksa-causal-versus-conceptual-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98528.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13487</span> Auditory and Visual Perceptual Category Learning in Adults with ADHD: Implications for Learning Systems and Domain-General Factors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yafit%20Gabay">Yafit Gabay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Attention deficit hyperactivity disorder (ADHD) has been associated with 
suboptimal functioning in the striatum and prefrontal cortex. Such abnormalities may impede the acquisition of perceptual categories, which are important for fundamental abilities such as object recognition and speech perception. Indeed, prior research has supported this possibility, demonstrating that children with ADHD perform as well as their neurotypical peers in visual category learning but use suboptimal learning strategies. However, much less is known about category learning processes in the auditory domain or among adults with ADHD, in whom prefrontal functions are more mature than in children. Here, we investigated auditory and visual perceptual category learning in adults with ADHD and neurotypical individuals. Specifically, we examined learning of rule-based categories – presumed to be optimally learned by a frontal-cortex-mediated hypothesis-testing system – and information-integration categories – hypothesized to be optimally learned by a striatally mediated reinforcement learning system. Consistent with the striatal and prefrontal cortical impairments observed in ADHD, our results show that, across sensory modalities, both rule-based and information-integration category learning is impaired in adults with ADHD. Computational modeling analyses revealed that individuals with ADHD were slower to shift to optimal strategies than neurotypicals, regardless of category type or modality. Taken together, these results suggest that both explicit, frontally mediated and implicit, striatally mediated category learning are impaired in ADHD. These impairments extend across sensory modalities and likely arise from domain-general mechanisms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=ADHD" title="ADHD">ADHD</a>, <a href="https://publications.waset.org/abstracts/search?q=category%20learning" title=" category learning"> category learning</a>, <a href="https://publications.waset.org/abstracts/search?q=modality" title=" modality"> modality</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20modeling" title=" computational modeling"> computational modeling</a> </p> <a href="https://publications.waset.org/abstracts/185848/auditory-and-visual-perceptual-category-learning-in-adults-with-adhd-implications-for-learning-systems-and-domain-general-factors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185848.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">47</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13486</span> Korean Trends as a Factor Affecting Academic Performance among Students in Higher Education Institutions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20F.%20Carigma">D. F. Carigma</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Cruzado"> E. Cruzado</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20J.%20Hagos"> M. J. Hagos</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Perater"> K. Perater</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Ramos"> D. Ramos</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Navarro"> R. Navarro</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Galingan"> R. 
Galingan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Korean culture is spreading rapidly across the globe. The young generation is highly engaged in Korean trends, such as Korean pop music, dramas and movies, fashion, food, and beauty standards. Because this engagement is mediated by media use, Korean trends may carry the known effects of media on people and society, such as addiction, perceptual influence, psychological effects, time consumption, and impulsive spending. The study aimed to determine whether there is a relationship between these variable factors and students' academic performance. The proponents used a quantitative approach with 388 participants at the Technological Institute of the Philippines. This study shows that Korean trends and the effects of media on people and society are correlated with these variable factors. Moreover, this study may inform future research in colleges and universities in the Philippines on how engagement in Korean trends affects the behavior and academic performance of students in higher education. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=academic%20performance" title="academic performance">academic performance</a>, <a href="https://publications.waset.org/abstracts/search?q=addiction" title=" addiction"> addiction</a>, <a href="https://publications.waset.org/abstracts/search?q=effect%20of%20media%20on%20people%20and%20society" title=" effect of media on people and society"> effect of media on people and society</a>, <a href="https://publications.waset.org/abstracts/search?q=Korean%20trend" title=" Korean trend"> Korean trend</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20influence" title=" perceptual influence"> perceptual influence</a>, <a href="https://publications.waset.org/abstracts/search?q=psychological%20effect" title=" psychological effect"> psychological effect</a> </p> <a href="https://publications.waset.org/abstracts/179277/korean-trends-as-a-factor-affecting-academic-performance-among-students-in-higher-education-institutions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179277.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13485</span> Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Toni%20Maristela%20C.%20Estabillo">Toni Maristela C. Estabillo</a>, <a href="https://publications.waset.org/abstracts/search?q=Michaela%20V.%20Matienzo"> Michaela V. Matienzo</a>, <a href="https://publications.waset.org/abstracts/search?q=Mikaela%20L.%20Sabangan"> Mikaela L. 
Sabangan</a>, <a href="https://publications.waset.org/abstracts/search?q=Rosette%20M.%20Tienzo"> Rosette M. Tienzo</a>, <a href="https://publications.waset.org/abstracts/search?q=Justine%20L.%20Bahinting"> Justine L. Bahinting</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study concerns blind watermarking of images with different categories and properties using two algorithms, namely the Discrete Wavelet Transform (DWT) and the Patchwork algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the properties of the original image with those of the processed one. Perceptual transparency is measured by visual inspection in a survey. Security is measured by applying geometrical and non-geometrical attacks in pass/fail testing. The values used to measure these criteria are mostly based on Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The results are interpreted using statistical methods such as averaging, the z-test, and surveys. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone. 
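The MSE and PSNR measures named in this abstract are standard and can be sketched as follows; this is an illustrative computation on toy data, not the authors' evaluation program:

```python
import math

def mse(original, processed):
    """Mean Squared Error between two equal-length pixel sequences."""
    assert len(original) == len(processed)
    return sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)

def psnr(original, processed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less visible degradation."""
    error = mse(original, processed)
    if error == 0:
        return math.inf  # identical images
    return 10.0 * math.log10((max_value ** 2) / error)

# Toy example: a flat 4x4 8-bit image (flattened) and a copy with one pixel altered by 16
img = [0] * 16
marked = list(img)
marked[0] = 16
print(mse(img, marked))             # 16.0 (one pixel off by 16, averaged over 16 pixels)
print(round(psnr(img, marked), 2))  # 36.09
```

Watermarking evaluations typically treat PSNR values around 40 dB and above as imperceptible degradation, which is why the criterion pairs naturally with the survey-based transparency check described above.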
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20watermarking" title="blind watermarking">blind watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform%20algorithm" title=" discrete wavelet transform algorithm"> discrete wavelet transform algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=patchwork%20algorithm" title=" patchwork algorithm"> patchwork algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20watermark" title=" digital watermark"> digital watermark</a> </p> <a href="https://publications.waset.org/abstracts/49404/blind-watermarking-using-discrete-wavelet-transform-algorithm-with-patchwork" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13484</span> English Vowel Duration Affected by Voicing Contrast: A Cross Linguistic Examination of L2 English Production and Perception by Asian Learners of English</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20Van%20Anh%20Le">Nguyen Van Anh Le</a>, <a href="https://publications.waset.org/abstracts/search?q=Mafuyu%20Kitahara"> Mafuyu Kitahara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In several languages, it is widely acknowledged that vowels are longer before voiced consonants than before voiceless ones such as English. 
However, in Mandarin Chinese, Vietnamese, Japanese, and Korean, the distributions of voiced-voiceless stop contrasts and long-short vowel differences are vastly different from those of English. The purpose of this study is to determine whether these learners' L2 English production and perception change in terms of vowel duration as a function of stop voicing. The production measurements in the database of Asian learners revealed a different effect from that observed in native speakers: there were no evident vowel-lengthening patterns. The results of the perceptual experiment with 24 participants indicated that individuals tended to prefer voiceless stops when preceding vowels were shortened, but there was no statistically significant difference between intermediate, upper-intermediate, and advanced-level learners. However, learners demonstrated distinct perceptual patterns for different vowels and stops. The findings have valuable implications for L2 English speech acquisition. Keywords: voiced/voiceless stops, preceding vowel duration, voiced/voiceless perception, L2 English, L1 Mandarin Chinese, L1 Vietnamese, L1 Japanese, L1 Korean <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voiced%2Fvoiceless%20stops" title="voiced/voiceless stops">voiced/voiceless stops</a>, <a href="https://publications.waset.org/abstracts/search?q=preceding%20vowel%20duration" title=" preceding vowel duration"> preceding vowel duration</a>, <a href="https://publications.waset.org/abstracts/search?q=voiced%2Fvoiceless%20perception" title=" voiced/voiceless perception"> voiced/voiceless perception</a>, <a href="https://publications.waset.org/abstracts/search?q=L2%20english" title=" L2 english"> L2 english</a> </p> <a href="https://publications.waset.org/abstracts/150217/english-vowel-duration-affected-by-voicing-contrast-a-cross-linguistic-examination-of-l2-english-production-and-perception-by-asian-learners-of-english" class="btn 
btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150217.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13483</span> Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Bonolo%20Mathevula">Maria Bonolo Mathevula</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual Perceptual Skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before commencement of schooling for early diagnosis of VPSs anomalies, because the primary role of VPSs is to support children's academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study’s objectives were to determine the prevalence of VPSs anomalies amongst school children in the Foundation Phase; to determine the relationship between children’s academic performance and VPSs anomalies; and to investigate the relationship between VPSs anomalies and refractive error. Methodology: This was a mixed-methods study in which triangulated qualitative (interviews) and quantitative (questionnaire and clinical data) methods were used; it was therefore descriptive in nature. The study’s target population was school children in the Foundation Phase. 
The study followed a purposive sampling method: school children in the Foundation Phase were sampled to form part of this study provided their parents had signed the consent form. Data were collected using standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, preliminary outcomes based on data collected from one of the Foundation Phase groups suggest the following: while VPSs anomalies are not prevalent, they nevertheless have an indirect relationship with children’s academic performance in the Foundation Phase. Notably, VPSs anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests. Conclusion: Based on the study’s preliminary findings, it is clear that optometrists still have much to do with respect to research on VPSs. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, also conduct school-readiness pre-assessments on children before they commence their grades in the Foundation Phase. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=foundation%20phase" title="foundation phase">foundation phase</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=school%20children" title=" school children"> school children</a>, <a href="https://publications.waset.org/abstracts/search?q=refractive%20error" title=" refractive error"> refractive error</a> </p> <a href="https://publications.waset.org/abstracts/148271/anomalies-of-visual-perceptual-skills-amongst-school-children-in-foundation-phase-in-olievenhoutbosch-gauteng-province-south-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148271.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">102</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=450">450</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=451">451</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=perceptual%20present&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a 
href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> 
</div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>