<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: video consultations</title> <meta name="description" content="Search results for: video consultations"> <meta name="keywords" content="video consultations"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="video consultations" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="video consultations"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1067</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: video consultations</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1067</span> Tackling the Digital Divide: Enhancing Video Consultation Access for Digital Illiterate Patients in the Hospital</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wieke%20Ellen%20Bouwes">Wieke Ellen Bouwes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to unravel which factors enhance accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews with patients, hospital employees, eHealth experts, and digital support organizations were held. 
Patients with low digital literacy received in-home support during real-time video consultations and were observed while setting up these consultations. Key findings highlight the importance of patient acceptance, emphasizing the benefits of video consultations and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Healthcare practitioners' familiarity with organizations that support patients in using digital tools enhances accessibility. Moreover, considerations regarding the Dutch implementation of the General Data Protection Regulation (GDPR) influence the support patients receive. Provider readiness to use video consultations also influences patient access. Further, alignment between learning styles and support methods seems to determine patients' ability to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore the effectiveness of learning methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20consultations" title="video consultations">video consultations</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20literacy%20skills" title=" digital literacy skills"> digital literacy skills</a>, <a href="https://publications.waset.org/abstracts/search?q=effectiveness%20of%20support" title=" effectiveness of support"> effectiveness of support</a>, <a href="https://publications.waset.org/abstracts/search?q=intra-%20and%20inter-organizational%20relationships" title=" intra- and inter-organizational relationships"> intra- and inter-organizational relationships</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20acceptance%20of%20video%20consultations" title=" patient acceptance of video consultations"> patient acceptance of video consultations</a> </p> <a href="https://publications.waset.org/abstracts/173756/tackling-the-digital-divide-enhancing-video-consultation-access-for-digital-illiterate-patients-in-the-hospital" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173756.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1066</span> Online Versus Face-To-Face – How Do Video Consultations Change The Doctor-Patient-Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Markus%20Feufel">Markus Feufel</a>, <a href="https://publications.waset.org/abstracts/search?q=Friederike%20Kendel"> Friederike Kendel</a>, <a href="https://publications.waset.org/abstracts/search?q=Caren%20Hilger"> Caren Hilger</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Selamawit%20Woldai"> Selamawit Woldai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the coronavirus pandemic, the use of video consultations has increased remarkably. For vulnerable groups such as oncological patients, the advantages seem obvious. But how might video consultations change the doctor-patient relationship compared to face-to-face consultations? Which barriers may hinder the effective use of this consultation format in practice? We present first results from a mixed-methods field study, funded by the Federal Ministry of Health, which will provide the basis for a hands-on guide for both physicians and patients on how to improve the quality of video consultations. We use a quasi-experimental design to analyze qualitative and quantitative differences between face-to-face and video consultations based on video recordings of N = 64 actual counseling sessions (n = 32 for each consultation format). Data will be recorded from n = 32 gynecological and n = 32 urological cancer patients at two clinics. After the consultation, all patients will be asked to fill out a questionnaire about their consultation experience. For quantitative analyses, the counseling sessions will be systematically compared in terms of verbal and nonverbal communication patterns. Relative frequencies of eye contact and the information exchanged will be compared using χ²-tests. The validated questionnaire MAPPIN'Obsdyad will be used to assess the expression of shared decision-making parameters. In addition, semi-structured interviews will be conducted with n = 10 physicians and n = 10 patients experienced with video consultation, for which a qualitative content analysis will be conducted. We will describe the comprehensive methodological approach used to compare video vs. face-to-face consultations and present first evidence on how video consultations change the doctor-patient interaction. 
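The planned χ²-comparison of relative frequencies (e.g., of eye contact across the two consultation formats) can be sketched with a 2×2 contingency table; the counts below are purely hypothetical and do not come from the study.

```python
# Hypothetical 2x2 contingency table:
#                 eye contact   no eye contact
# face-to-face        80              20
# video               60              40

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_sums = (a + b, c + d)
    col_sums = (a + c, b + d)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

table = [[80, 20], [60, 40]]
stat = chi_square_2x2(table)
print(round(stat, 3))  # 9.524
```

In practice the statistic would be compared against the χ² distribution with one degree of freedom (e.g., via scipy.stats) to obtain a p-value.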
We will also outline possible barriers to video consultations and how they may be overcome. Based on the results, we will present and discuss recommendations on how to prepare and conduct high-quality video consultations from the perspective of both physicians and patients. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20consultation" title="video consultation">video consultation</a>, <a href="https://publications.waset.org/abstracts/search?q=patient-doctor-relationship" title=" patient-doctor-relationship"> patient-doctor-relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20applications" title=" digital applications"> digital applications</a>, <a href="https://publications.waset.org/abstracts/search?q=technical%20barriers" title=" technical barriers"> technical barriers</a> </p> <a href="https://publications.waset.org/abstracts/153083/online-versus-face-to-face-how-do-video-consultations-change-the-doctor-patient-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153083.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1065</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, the design of an H.263-based wireless video transceiver for a wireless camera system is presented. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. 
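The need for compression in such a system follows from simple bitrate arithmetic; the figures below assume YUV 4:2:0 sampling (12 bits per pixel) at 30 frames per second, which are typical values not stated in the abstract.

```python
# Raw bitrate of uncompressed NTSC 720x480 video, assuming
# YUV 4:2:0 sampling (12 bits/pixel) and 30 frames/second.
width, height = 720, 480
bits_per_pixel = 12          # 8 bits luma + 4 bits chroma per pixel
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e6)         # ~124.4 Mbps of raw video

# Target channel rate for the described system: under 1 Mbps.
target_bps = 1_000_000
compression_ratio = raw_bps / target_bps
print(round(compression_ratio))  # roughly 124:1, hence the need for H.263 coding
```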
Furthermore, the standard H.263 video encoding technique is used for compression, since a wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1064</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications. Text is incorporated in digital video to provide useful information about that video. 
Hence, the need arises to detect text present in video for video understanding and indexing. This is achieved in two steps: the first step is text localization and the second is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1063</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid Benkaddour</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Nowadays, huge multimedia repositories make the browsing, retrieval and delivery of video content slow and difficult. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental modes are static and dynamic summarization. We present techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1062</span> The Bespoke ‘Hybrid Virtual Fracture Clinic’ during the COVID-19 Pandemic: A Paradigm Shift?</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anirudh%20Sharma">Anirudh Sharma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The COVID-19 pandemic necessitated a change in the manner in which outpatient fracture clinics are conducted, owing to the need to reduce footfall in hospitals. While studies of virtual fracture clinics have shown them to be useful and effective, they focus exclusively on remote consultations. Our service, however, was bespoke to the patient – either a face-to-face or telephone consultation depending on patient need – a ‘hybrid virtual clinic (HVC).’ We report patient satisfaction and outcomes with this novel service. Methods: Patients booked onto our fracture clinics during the first 2 weeks of national lockdown were retrospectively contacted to assess the mode of consultation (virtual, face-to-face, or hybrid), patient experience, and outcome. Patient experience was assessed using the net promoter score (NPS), customer effort score (CES), and customer satisfaction score (CSS), together with patients' likelihood of using the HVC in the absence of a pandemic. Patient outcomes were assessed using the components of the EQ5D score. Results: Of 269 possible patients, 140 responded to the questionnaire. Of these, 66.4% had ‘hybrid’ consultations, 27.1% had only virtual consultations, and 6.4% had only face-to-face consultations. The mean overall NPS, CES, and CSS (on a scale of 1-10) were 7.27, 7.25, and 7.37, respectively. The mean likelihood of patients using the HVC in the absence of a pandemic was 6.5/10. Patients who had ‘hybrid’ consultations showed better effort scores and greater overall satisfaction than those with virtual consultations only, and also reported superior EQ5D outcomes (mean 79.27 vs. 72.7). Patients who did not require surgery reported greater satisfaction (mean 7.51 vs. 7.08) and were more likely to use the HVC in the absence of a pandemic. 
Conclusion: Our study indicates that a bespoke HVC achieves good overall patient satisfaction and outcomes and is a better format for fracture clinic services than virtual consultations alone. It may be the preferred mode for fracture clinics in similar situations in the future. Further analysis of the impact of the HVC on resources and clinician experience is needed in order to appreciate this new paradigm shift. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hybrid%20virtual%20clinic" title="hybrid virtual clinic">hybrid virtual clinic</a>, <a href="https://publications.waset.org/abstracts/search?q=coronavirus" title=" coronavirus"> coronavirus</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=fracture%20clinic" title=" fracture clinic"> fracture clinic</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20consultation" title=" remote consultation"> remote consultation</a> </p> <a href="https://publications.waset.org/abstracts/130268/the-bespoke-hybrid-virtual-fracture-clinic-during-the-covid-19-pandemic-a-paradigm-shift" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130268.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1061</span> The Use of Video Conferencing to Aid the Decision in Whether Vulnerable Patients Should Attend In-Person Appointments during a COVID Pandemic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Arikat">Nadia Arikat</a>, 
<a href="https://publications.waset.org/abstracts/search?q=Katharine%20Blain"> Katharine Blain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the worst of the COVID-19 pandemic, only essential treatment was provided for patients needing urgent care. As the pandemic has continued, there has been a return to more routine referrals for paediatric dentistry advice and treatment for specialist conditions. However, some of these patients and/or their carers may have significant medical issues, meaning that attending in-person appointments carries additional risks. This poses an ethical dilemma for clinicians. This project looks at how a secure video conferencing platform (“Near Me”) has been used to assess the need and urgency for in-person new patient visits, particularly for patients and families with additional risks. “Near Me” is a secure online video consulting service used by NHS Scotland. In deciding whether to bring a new patient to the hospital for an appointment, the clinical condition of the teeth and the urgency of treatment need to be assessed; this is not always apparent from the referral letter. In addition, it is important to judge the risks to patients and carers of such visits, particularly if they have medical issues. The use and effectiveness of “Near Me” consultations in helping to decide whether vulnerable paediatric patients should have in-person appointments will be illustrated and discussed using two families: one where the child is medically compromised (Alagille syndrome with a previous liver transplant), and the other where there is a medically compromised parent (undergoing chemotherapy and a bone marrow transplant). In both cases, it was necessary to take into consideration the risks and moral implications of requesting that they attend the dental hospital during a pandemic. 
The option of remote consultation allowed further clinical information to be evaluated and the families to take part in the decision-making process about whether and when such visits should be scheduled. These cases demonstrate how medically compromised patients (or patients with vulnerable carers) could have their dental needs assessed in a socially distanced manner by video consultation. Together, the clinician and the patient’s family can weigh the risks, with regard to COVID-19, of attending in-person appointments against the benefit of having treatment. This is particularly important for new paediatric patients who have not yet had a formal assessment. The limitations of this technology will also be discussed. It is limited by internet availability, the strength of the connection, the video quality, and whether families own a device that allows video calls. For those from a lower socio-economic background or living in some rural areas, this may not be possible, limiting the service’s usefulness. For the two patients discussed in this project, where the urgency of their dental condition was unclear, video consultation proved beneficial in deciding an appropriate outcome and preventing unnecessary exposure of vulnerable people to a hospital environment during a pandemic, demonstrating the usefulness of such technology when used appropriately. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title="COVID-19">COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=paediatrics" title=" paediatrics"> paediatrics</a>, <a href="https://publications.waset.org/abstracts/search?q=triage" title=" triage"> triage</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20consultations" title=" video consultations"> video consultations</a> </p> <a href="https://publications.waset.org/abstracts/142314/the-use-of-video-conferencing-to-aid-the-decision-in-whether-vulnerable-patients-should-attend-in-person-appointments-during-a-covid-pandemic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142314.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1060</span> Lecture Video Indexing and Retrieval Using Topic Keywords</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Sandesh">B. J. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework to help users search for and retrieve the portions of a lecture video that are of interest to them. 
This is achieved by temporally segmenting and indexing the lecture video using topic keywords. For this purpose, we use the transcribed text of the video and documents relevant to the video topic extracted from the web. The keywords for indexing are found by applying non-negative matrix factorization (NMF) topic modeling to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords; these are mapped to the video to find the start and end times of the portions of the video covering a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for a user query. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1059</span> Distributed Processing for Content Based Lecture Video 
Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a huge amount of lecture video data available for public use, and many more lecture videos are created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task, so an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection, which offers a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video optical character recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework. 
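The segment-level keyword extraction step can be sketched as a MapReduce-style word count over OCR'd slide text. The sketch below simulates the mapper and reducer in plain Python rather than on an actual Hadoop cluster; the segment data and stop-word list are illustrative only.

```python
from collections import Counter
from itertools import chain

# Hypothetical OCR output per video segment (text from key-frames).
segments = {
    "seg1": "hadoop distributed file system hadoop mapreduce",
    "seg2": "video indexing with keyword extraction keyword search",
}
STOP_WORDS = {"with", "the", "a", "of"}

def mapper(segment_id, text):
    """Emit (word, segment_id) pairs, as a Hadoop mapper would."""
    for word in text.lower().split():
        if word not in STOP_WORDS:
            yield word, segment_id

def reducer(pairs):
    """Count word occurrences per segment (the shuffle + reduce phase)."""
    counts = Counter(pairs)          # keys are (word, segment_id) pairs
    index = {}
    for (word, seg), n in counts.items():
        index.setdefault(seg, Counter())[word] = n
    return index

index = reducer(chain.from_iterable(
    mapper(seg, text) for seg, text in segments.items()))

# The most frequent word per segment becomes its segment-level keyword.
top = {seg: c.most_common(1)[0][0] for seg, c in index.items()}
print(top)  # {'seg1': 'hadoop', 'seg2': 'keyword'}
```

On a real cluster the same mapper/reducer logic would be expressed as Hadoop streaming jobs or MapReduce classes, with segments partitioned across nodes.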
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1058</span> Positive Politeness in Writing Centre Consultations with an Emphasis on Praise</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Avasha%20Rambiritch">Avasha Rambiritch</a>, <a href="https://publications.waset.org/abstracts/search?q=Adelia%20Carstens"> Adelia Carstens</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Especially in the context of a writing center, learning takes place during, and as part of, the conversations between the writing center tutor and the student. This interaction, or dialogue, is an integral part of writing center research and is the focus of this largely qualitative study, which employs a politeness lens. 
While there is some research on positive politeness strategies employed by writing center tutors, there is very little research specifically on praise as a positive politeness strategy. This study attempts to fill this gap by analyzing a corpus of 10 video-recorded consultations to determine how tutors in a writing center utilize the positive politeness strategy of praise. Findings indicate that while tutors exploit a range of politeness strategies, praise is used more often than any other strategy. The research indicates that praise as a politeness strategy is utilized significantly more when commenting on higher-order concerns, in line with the writing center literature. The benefits of this study include insights into how such analyses can be used to better prepare and equip the tutors (usually postgraduate students appointed as part-time tutors in the writing center) for the work they do on a daily basis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=writing%20center" title="writing center">writing center</a>, <a href="https://publications.waset.org/abstracts/search?q=academic%20writing" title=" academic writing"> academic writing</a>, <a href="https://publications.waset.org/abstracts/search?q=positive%20politeness" title=" positive politeness"> positive politeness</a>, <a href="https://publications.waset.org/abstracts/search?q=tutor" title=" tutor"> tutor</a> </p> <a href="https://publications.waset.org/abstracts/135646/positive-politeness-in-writing-centre-consultations-with-an-emphasis-on-praise" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">1057</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals will lead to unanticipated effects. Such as image distortion, image blurring etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos .A stable output video will be attained without the effect of jitter which is caused due to shaking of handheld camera during video recording. Firstly, salient points from each frame from the input video are identified and processed followed by optimizing and stabilize the video. Optimization includes the quality of the video stabilization. This method has shown good result in terms of stabilization and it discarded distortion from the output videos recorded in different circumstances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1056</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For the 58 years since the first video game came into being, the video game industry is getting through an explosive evolution from then on. Video games exert great influence on society and become a reflection of public life to some extent. Video game virtual spaces are where activities are taking place like real spaces. And that’s the reason why some architects pay attention to video games. 
However, compared to research on the appearance of games, we observe a lack of comprehensive theory on the construction of video game virtual spaces. The research method of this paper is first to collect literature and conduct theoretical research on virtual space in video games, and then to draw analogies with views on spatial phenomena from literary and film theory. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space – narrative space – player space”, corresponding to the exterior, expressive, and affective parts of the game space. We also illustrate each sub-space with numerous instances of published video games. We hope this writing can promote the interactive development of video games and architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">1055</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a rapid growth of digital videos and data communications, video summarization that provides a shorter version of the video for fast video browsing and retrieval is necessary. Key frame extraction is one of the mechanisms to generate video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most different frames and/or not cover the entire content of a video. In this paper, we propose a method of video summarization which provides the reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual informaion as our objective functions for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach that produces video summary with better coverage of the entire video content while less redundancy among key frames comparing to the state-of-the-art approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1054</span> Application of Medical Information System for Image-Based Second Opinion Consultations–Georgian Experience</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kldiashvili%20Ekaterina">Kldiashvili Ekaterina</a>, <a href="https://publications.waset.org/abstracts/search?q=Burduli%20Archil"> Burduli Archil</a>, <a href="https://publications.waset.org/abstracts/search?q=Ghortlishvili%20Gocha"> Ghortlishvili Gocha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction – Medical information system (MIS) is at the heart of information technology (IT) implementation policies in healthcare systems around the world. Different architecture and application models of MIS are developed. 
Despite obvious advantages and benefits, application of MIS in everyday practice is slow. Objective – Based on an analysis of the existing models of MIS in Georgia, a multi-user web-based approach has been created. This presentation covers the architecture of the system and its application for image-based second opinion consultations. Methods – The MIS has been created with .Net technology and an SQL database architecture. It provides local (intranet) and remote (internet) access to the system and management of its databases. The MIS is a fully operational system, successfully used for medical data registration and management as well as for the creation, editing and maintenance of electronic medical records (EMR). Five hundred Georgian-language electronic medical records from cervical screening activity, illustrated by images, were selected for second opinion consultations. Results – The primary goal of the MIS is patient management. However, the system can be successfully applied for image-based second opinion consultations. Discussion – The ideal of healthcare in the information age must be to create a situation where healthcare professionals spend more time creating knowledge from medical information and less time managing medical information. The application of easily available and adaptable technology and improvement of infrastructure conditions is the basis for eHealth applications. Conclusion – The MIS is a promising and relevant technology solution. It can be successfully and effectively used for image-based second opinion consultations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20images" title="digital images">digital images</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20information%20system" title=" medical information system"> medical information system</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20opinion%20consultations" title=" second opinion consultations"> second opinion consultations</a>, <a href="https://publications.waset.org/abstracts/search?q=electronic%20medical%20record" title=" electronic medical record"> electronic medical record</a> </p> <a href="https://publications.waset.org/abstracts/39945/application-of-medical-information-system-for-image-based-second-opinion-consultations-georgian-experience" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39945.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">450</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1053</span> Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H. 265</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sarumathi">S. Sarumathi</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Deepadharani"> C. Deepadharani</a>, <a href="https://publications.waset.org/abstracts/search?q=Garimella%20Archana"> Garimella Archana</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Dakshayani"> S. Dakshayani</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Logeshwaran"> D. Logeshwaran</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jayakumar"> D. 
Jayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayarangan%20Natarajan"> Vijayarangan Natarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need of the hour for customers who use a dial-up or low-broadband connection for their internet services is access to HD video data. This can be achieved by developing a new video format using H.265, the latest video codec standard developed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) in April 2013. This new standard for video compression has the potential to deliver higher performance than earlier standards such as H.264/AVC. In comparison with H.264, HEVC offers a clearer, higher-quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high-definition videos using low bandwidth. It doubles the data compression ratio, supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports the H.265 standard. Major application areas in the near future include enhancements in the performance of digital television services such as Tata Sky and Sun Direct, Blu-ray discs, mobile video, video conferencing, and internet and live video streaming. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20HD%20video" title="access HD video">access HD video</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20265%20video%20standard" title=" H. 265 video standard"> H. 
265 video standard</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance" title=" high performance"> high performance</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20quality%20image" title=" high quality image"> high quality image</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20bandwidth" title=" low bandwidth"> low bandwidth</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20video%20format" title=" new video format"> new video format</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20streaming%20applications" title=" video streaming applications"> video streaming applications</a> </p> <a href="https://publications.waset.org/abstracts/1881/efficient-storage-and-intelligent-retrieval-of-multimedia-streams-using-h-265" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1052</span> Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Assma%20Azeroual">Assma Azeroual</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Afdel"> Karim Afdel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El%20Hajji"> Mohamed El Hajji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Douzi"> Hassan Douzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video 
processing such as video retrieval, video summarization, and video indexing. In this paper, we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks. These dominant blocks are located on the contours and their nearby textures. When a video frame shows a noticeable change, its dominant blocks change, and we can then extract a key frame. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-blocks image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced, and we are then able to extract the key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">397</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">1051</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult to evaluate musical instrument playing in a video by computer system. Any television or film video clip with music information are rich sources for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural network (CNN) and pass network learned features through recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNN for music and video feature extraction and then fine tune each model. The music network use 2D convolutional network and video network use 3D convolution (C3D). Finally, we concatenate each music and video feature by preserving the time varying features. The long short term memory (LSTM) network is used for long-term dynamic feature characterization and then use late fusion with generalized mean. The proposed network performs better performance to recognize the musical instrument using audio-video multimodal neural network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1050</span> Recognising Patients’ Perspective on Health Behaviour Problems Through Laughter: Implications for Patient-Centered Care Practice in Behaviour Change Consultations in General Practice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Binh%20Thanh%20Ta">Binh Thanh Ta</a>, <a href="https://publications.waset.org/abstracts/search?q=Elizabeth%20Sturgiss"> Elizabeth Sturgiss</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Central to patient-centered care is the idea of treating a patient as a person and understanding their perspectives regarding their health conditions and care preferences. Surprisingly, little is known about how GPs can understand their patients’ perspectives. 
This paper addresses the challenge of understanding patient perspectives in behavior change consultations by adopting Conversation Analysis (CA), which is an empirical research approach that allows both researchers and the audience to examine patients’ perspectives as displayed in GP-patient interaction. To understand people’s perspectives, CA researchers do not rely on what they say but instead on how they demonstrate their endogenous orientations to social norms when they interact with each other. Underlying CA is the notion that social interaction is orderly in every respect. (It is important to note that social orders should not be treated as exogenous sets of rules that predetermine human behaviors. Rather, social orders are constructed and oriented by social members through their interactional practices. Also, note that these interactional practices are the resources shared by all social members). As CA offers tools to uncover the orderliness of interactional practices, it not only allows us to understand the perspective of a particular patient in a particular medical encounter but, more importantly, enables us to recognise the shared interactional practice for signifying a particular perspective. Drawing on 10 video-recorded consultations on behavior change in primary care, we have discovered the orderliness of patient laughter when reporting health behaviors, which signifies their orientation to the problematic nature of the reported behaviors. Among 24 cases where patients reported their health behaviors, we found 19 cases in which they laughed while speaking. In the five cases where patients did not laugh, we found that they explicitly framed their behavior as unproblematic. This finding echoes the body of CA research on laughter, which suggests that laughter produced by first speakers (as opposed to laughter in response to what has been said earlier) normally indicates some sort of problem oriented to the self (e.g., self-teasing, self-deprecation, etc.). 
This finding points to the significance of understanding when and why patients laugh; such understanding would assist GPs to recognise whether patients treat their behavior as problematic or not, thereby producing responses sensitive to patient perspectives. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=patient%20centered%20care" title="patient centered care">patient centered care</a>, <a href="https://publications.waset.org/abstracts/search?q=laughter" title=" laughter"> laughter</a>, <a href="https://publications.waset.org/abstracts/search?q=conversation%20analysis" title=" conversation analysis"> conversation analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20care" title=" primary care"> primary care</a>, <a href="https://publications.waset.org/abstracts/search?q=behaviour%20change%20consultations" title=" behaviour change consultations"> behaviour change consultations</a> </p> <a href="https://publications.waset.org/abstracts/160072/recognising-patients-perspective-on-health-behaviour-problems-through-laughter-implications-for-patient-centered-care-practice-in-behaviour-change-consultations-in-general-practice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160072.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1049</span> Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Jasim%20Habeeb">Nada Jasim Habeeb</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Rana%20Saad%20Mohammed"> Rana Saad Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muntaha%20Khudair%20Abbass"> Muntaha Khudair Abbass </a> </p> <p class="card-text"><strong>Abstract:</strong></p> For more efficient and fast video summarization, this paper presents a surveillance video summarization method. The presented method works to improve video summarization technique. This method depends on temporal differencing to extract most important data from large video stream. This method uses histogram differencing and Sum Conditional Variance which is robust against to illumination variations in order to extract motion objects. The experimental results showed that the presented method gives better output compared with temporal differencing based summarization techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20differencing" title="temporal differencing">temporal differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title=" video summarization"> video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20differencing" title=" histogram differencing"> histogram differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20conditional%20variance" title=" sum conditional variance"> sum conditional variance</a> </p> <a href="https://publications.waset.org/abstracts/54404/surveillance-video-summarization-based-on-histogram-differencing-and-sum-conditional-variance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">1048</span> Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Azimi">Maryam Azimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Banitalebi-Dehkordi"> Amin Banitalebi-Dehkordi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Dong"> Yuanyuan Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20T.%20Pourazad"> Mahsa T. Pourazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Panos%20Nasiopoulos"> Panos Nasiopoulos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for the High Dynamic Range (HDR) content. With the introduction of HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, which consists of representative indoor and outdoor video sequences with different brightness, motion levels and different representing types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirm that VIF quality metric outperforms all to their tested metrics in the presence of the tested types of distortions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HDR" title="HDR">HDR</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range" title=" dynamic range"> dynamic range</a>, <a href="https://publications.waset.org/abstracts/search?q=LDR" title=" LDR"> LDR</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title=" video compression"> video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20metrics" title=" video quality metrics"> video quality metrics</a> </p> <a href="https://publications.waset.org/abstracts/18171/evaluating-the-performance-of-existing-full-reference-quality-metrics-on-high-dynamic-range-hdr-video-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">524</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1047</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> 
</p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">1046</span> H.264 Video Privacy Protection Method Using Regions of Interest Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taekyun%20Doo">Taekyun Doo</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheongmin%20Ji"> Cheongmin Ji</a>, <a href="https://publications.waset.org/abstracts/search?q=Manpyo%20Hong"> Manpyo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video surveillance systems such as closed-circuit television (CCTV) are widely deployed to gather video of unspecified people for crime prevention, surveillance, and many other purposes. However, the abuse of CCTV raises concerns about invasions of personal privacy. In this paper, we propose an encryption method that protects personal privacy in H.264 compressed video bitstreams by encrypting only regions of interest (ROI), with no need to change the existing video surveillance system. In addition, encrypting ROIs in a compressed video bitstream is challenging because of spatial and temporal drift errors; for this reason, we also propose a novel drift mitigation method for encrypted ROIs. The proposed method was implemented using the JM reference software on H.264 compressed videos, and experimental results verify the proposed method and demonstrate its effectiveness. 
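The ROI-selective idea can be illustrated outside the bitstream domain. Below is a minimal pixel-domain sketch, not the paper's bitstream-level method (which must also handle drift): only the bytes inside a rectangular ROI are XORed with a keystream, so running the same operation twice decrypts. The hash-based keystream is a stand-in for a real cipher such as AES-CTR.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key||nonce||counter.
    Illustrative only -- a real system would use AES-CTR or similar."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_roi(frame: bytearray, width: int, roi: tuple, key: bytes, frame_no: int) -> None:
    """XOR-encrypt only the pixels inside roi = (x, y, w, h) of a grayscale
    frame stored row-major; pixels outside the ROI are left untouched.
    XOR is its own inverse, so calling this again decrypts."""
    x, y, w, h = roi
    nonce = frame_no.to_bytes(8, "big")  # per-frame nonce keeps keystreams distinct
    ks = keystream(key, nonce, w * h)
    i = 0
    for row in range(y, y + h):
        base = row * width + x
        for col in range(w):
            frame[base + col] ^= ks[i]
            i += 1
```

Because everything outside the ROI is untouched, the rest of the frame stays viewable while the protected region is scrambled.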
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264%2FAVC" title="H.264/AVC">H.264/AVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20encryption" title=" video encryption"> video encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=privacy%20protection" title=" privacy protection"> privacy protection</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20compression" title=" post compression"> post compression</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a> </p> <a href="https://publications.waset.org/abstracts/57651/h264-video-privacy-protection-method-using-regions-of-interest-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1045</span> The Impact of Keyword and Full Video Captioning on Listening Comprehension</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elias%20Bensalem">Elias Bensalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the effect of two types of captioning (full and keyword captioning) on listening comprehension. Thirty-six university-level EFL students participated in the study. They were randomly assigned to watch three video clips under three conditions. The first group watched the video clips with full captions. The second group watched the same video clips with keyword captions. 
The control group watched the video clips without captions. After watching each clip, participants took a listening comprehension test. At the end of the experiment, participants completed a questionnaire to measure their perceptions about the use of captions and the video clips they watched. Results indicated that the full captioning group significantly outperformed both the keyword captioning and the no captioning groups on the listening comprehension tests. However, this study did not find any significant difference between the keyword captioning group and the no captioning group. Results of the survey suggest that keyword captions were a source of distraction for participants. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=captions" title="captions">captions</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL" title=" EFL"> EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=listening%20comprehension" title=" listening comprehension"> listening comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a> </p> <a href="https://publications.waset.org/abstracts/62467/the-impact-of-keyword-and-full-video-captioning-on-listening-comprehension" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1044</span> Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salam%20Khalifa">Salam Khalifa</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a new method to reconstruct a temporally coherent 3D animation from single or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data, in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data. 
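Matching feature points across consecutive frames independently of resolution can be illustrated with a naive greedy nearest-neighbour sketch. The scoring below (a weighted colour plus depth distance) is a hypothetical stand-in for the authors' actual sampling and alignment pipeline:

```python
def match_features(prev_pts, curr_pts, w_color=1.0, w_depth=1.0):
    """Greedily match feature points between two frames, scoring each
    candidate pair by a weighted colour + depth distance. Each point is
    a tuple (x, y, z, r, g, b); the score uses appearance and depth,
    not array indices, so it is independent of frame resolution."""
    matches = []
    used = set()  # indices of curr_pts already claimed
    for i, p in enumerate(prev_pts):
        best_j, best_d = None, float("inf")
        for j, q in enumerate(curr_pts):
            if j in used:
                continue
            d_depth = (p[2] - q[2]) ** 2
            d_color = sum((p[k] - q[k]) ** 2 for k in (3, 4, 5))
            d = w_depth * d_depth + w_color * d_color
            if d < best_d:
                best_d, best_j = d, j
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

A production system would use a spatial index (e.g. a k-d tree) instead of the quadratic scan, but the scoring idea is the same.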
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20video" title="3D video">3D video</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20animation" title=" 3D animation"> 3D animation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20video" title=" RGB-D video"> RGB-D video</a>, <a href="https://publications.waset.org/abstracts/search?q=temporally%20coherent%203D%20animation" title=" temporally coherent 3D animation"> temporally coherent 3D animation</a> </p> <a href="https://publications.waset.org/abstracts/12034/temporally-coherent-3d-animation-reconstruction-from-rgb-d-video-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1043</span> Teleconsultations and The Need of Onsite Additional Medical Services</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cristina%20Hotoleanu">Cristina Hotoleanu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The recent Covid-19 pandemic accelerated the development of e-health, including telemedicine, smartphone applications, and medical wearable devices. Providing remote teleconsultations poses challenges that may require further face-to-face medical interaction. The aim of this study was to assess the correlation between the types of teleconsultations and the need for onsite medical services (investigations and medical visits) for diagnosis and treatment. 
Methods: A retrospective study was performed including all teleconsultations delivered through the platform of a telehealth provider in Romania (Telios Care SA) between May 1, 2021 and April 30, 2022. Binary data were analysed using the chi-square test with a significance level of p < 0.05. Results: Out of 7163 consultations, 3961 were phone calls, 1981 were online messages, and 1221 were video calls. Onsite medical services were indicated in 3327 (46.44%) cases; onsite investigations or onsite visits were recommended for 2908 patients, as follows: 2326 after phone calls, 582 after online messages, and none after video calls. Both onsite investigations and visits were indicated for 419 patients. The need for onsite additional medical services was significantly higher for phone calls than for the other two types of teleconsultation (chi-square = 1207.06, p = 0.00001). The indication for onsite services followed mainly teleconsultations in medical specialties (87.34%), significantly more than in the other specialties (chi-square = 914.59, p = 0.00001). Teleconsultations in surgical specialties and in other fields (pharmacy, dentistry, psychology, wellbeing: nutrition, fitness) resulted in 12.13% and less than 1%, respectively, of indications for onsite investigations or visits, explained by the use of video calls in most of these cases. Conclusion: A further onsite medical service was necessary in less than half of the teleconsultations. This indication occurred mainly after phone calls and after teleconsultations in medical specialties. Video calls were used mostly in psychology, nutrition, and fitness teleconsultations and did not require a further onsite medical service. Further studies are needed to better assess which types of teleconsultation and which specialties bring the greatest benefit to patients. 
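The reported statistic can be reproduced with a standard Pearson chi-square on a 2x2 contingency table. A small sketch, with referral counts inferred from the figures in the abstract (2326 of 3961 phone calls vs. 582 of the 3202 other consultations led to an onsite referral):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], using the shortcut formula
    n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), no continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Rows: phone calls vs. other teleconsultations;
# columns: onsite referral vs. no onsite referral (counts inferred from the abstract).
chi2 = chi_square_2x2([[2326, 3961 - 2326], [582, 3202 - 582]])
```

With these counts the statistic evaluates to roughly 1207, matching the chi-square value reported in the abstract.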
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=onsite%20medical%20services" title="onsite medical services">onsite medical services</a>, <a href="https://publications.waset.org/abstracts/search?q=phone%20calls" title=" phone calls"> phone calls</a>, <a href="https://publications.waset.org/abstracts/search?q=teleconsultations" title=" teleconsultations"> teleconsultations</a>, <a href="https://publications.waset.org/abstracts/search?q=telemedicine" title=" telemedicine"> telemedicine</a> </p> <a href="https://publications.waset.org/abstracts/151975/teleconsultations-and-the-need-of-onsite-additional-medical-services" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1042</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court, for example in child pornography cases, movie piracy cases, insurance claims, cases involving scientific fraud, and traffic monitoring. The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. 
In this paper, we propose an efficient passive method that detects inter-frame video tampering, together with its type and location, by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with z-score thresholding, achieving an efficiency above 95% on all tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) backgrounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1041</span> Video Sharing System Based On Wi-fi Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qidi%20Lin">Qidi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinbin%20Huang"> Jinbin Huang</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Weile%20Liang"> Weile Liang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone, and a PC server. The platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it can send commands to the camera and rotate the camera’s holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share live video with mobile phones. Furthermore, if the system’s PC server, the camera, and many mobile phones are connected together, it can transfer photos concurrently. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wifi%20Camera" title="Wifi Camera">Wifi Camera</a>, <a href="https://publications.waset.org/abstracts/search?q=socket%20mobile" title=" socket mobile"> socket mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=platform%20video%20monitoring" title=" platform video monitoring"> platform video monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20control" title=" remote control"> remote control</a> </p> <a href="https://publications.waset.org/abstracts/31912/video-sharing-system-based-on-wi-fi-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1040</span> Internet Optimization by Negotiating Traffic Times </h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Gonzalez">Carlos Gonzalez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes a system to optimize internet use by clients who need to download videos at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client&rsquo;s computer. The client, using the application software, communicates to the video provider a list of the client&rsquo;s future video demands. The video provider calculates which videos will be most in demand for download in the immediate future and requests from the internet provider the optimal hours for the downloading. The download times are sent to the application software, which uses the pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user&rsquo;s hard disk, which is only accessed by the application software on the client&rsquo;s computer. When the client is ready to watch a video, the application searches the list of videos currently stored in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize video download traffic is by negotiation between the internet communication provider and the video content provider. 
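The negotiation loop described above can be sketched as a toy scheduler. Assuming, hypothetically, that aggregated client demand decides priority and that each negotiated off-peak hour carries one bulk download:

```python
from collections import Counter

def schedule_downloads(client_demands, offpeak_hours):
    """Aggregate the future video demands reported by all clients and
    assign the most-demanded videos to the negotiated off-peak hours,
    one video per hour slot, highest aggregate demand first.
    Returns a mapping {hour: video_id}."""
    demand = Counter()
    for wishlist in client_demands:
        demand.update(wishlist)
    ranked = [video for video, _ in demand.most_common()]
    # zip truncates to whichever is shorter: spare hours or spare videos
    return dict(zip(offpeak_hours, ranked))
```

In the paper's system this mapping would be sent back to each client's application, which downloads the videos at the agreed hours into the protected disk area; the one-video-per-hour slot model is a simplification for illustration.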
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=internet%20optimization" title="internet optimization">internet optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20download" title=" video download"> video download</a>, <a href="https://publications.waset.org/abstracts/search?q=future%20demands" title=" future demands"> future demands</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20storage" title=" secure storage"> secure storage</a> </p> <a href="https://publications.waset.org/abstracts/107006/internet-optimization-by-negotiating-traffic-times" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1039</span> Grounded Theory of Consumer Loyalty: A Perspective through Video Game Addiction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Shaikh">Bassam Shaikh</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20A.%20Jumain"> R. S. A. Jumain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Game addiction has become an extremely important topic among psychology researchers, particularly in understanding and explaining why individuals become addicted to video games. Previous studies have discussed the effects of online game addiction on social responsibilities, health problems, government action, and individuals' purchasing behavior, as well as the causes that make individuals addicted to video games. 
Extending these concepts to marketing, it could be argued that the phenomenon could enlighten and extend our understanding of consumer loyalty. This study took the Grounded Theory approach and found motivation, satisfaction, fulfillment, exploration, and achievement to be among the important elements that build consumer loyalty. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grounded%20theory" title="grounded theory">grounded theory</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20loyalty" title=" consumer loyalty"> consumer loyalty</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20game%20addiction" title=" video game addiction"> video game addiction</a> </p> <a href="https://publications.waset.org/abstracts/9724/grounded-theory-of-consumer-loyalty-a-perspective-through-video-game-addiction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9724.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1038</span> The Effect of Video Games on English as a Foreign Language Students&#039; Language Learning Motivation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamim%20Ali">Shamim Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Researchers and teachers have begun developing digital games and model environments for educational purposes; therefore, this study examines the effect of a video game on secondary school students’ language 
learning motivation. Secondly, it tries to identify opportunities to develop a decision-making process, and it simultaneously analyzes solutions for further implementation in an educational setting. Participants were 30 male students randomly assigned to one of the following three treatments: 10 students were assigned to read the game’s story; 10 students played the video game; and the last 10 students acted as watchers and observers, whose duty was to watch their classmates play the digital video game. A language learning motivation scale was developed and given to the participants as a pre- and post-test. Results indicated a significant gain in language learning motivation, and the participants were quite motivated in the end. It is thus concluded that the use of video games can help enhance high school students’ language learning motivation. It was suggested that video games be used as a complementary activity, not as a replacement for the textbook, since excessive use of video games can divert the original purpose of learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=EFL" title="EFL">EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20as%20a%20Foreign%20Language" title=" English as a Foreign Language"> English as a Foreign Language</a>, <a href="https://publications.waset.org/abstracts/search?q=motivation" title=" motivation"> motivation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL%20learners" title=" EFL learners"> EFL learners</a> </p> <a href="https://publications.waset.org/abstracts/100055/the-effect-of-video-games-on-english-as-a-foreign-language-students-language-learning-motivation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100055.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=6">6</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=35">35</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=36">36</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20consultations&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a 
href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> 
</div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
