<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: video conferencing course</title> <meta name="description" content="Search results for: video conferencing course"> <meta name="keywords" content="video conferencing course"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="video conferencing course" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="video conferencing course"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 992</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: video conferencing course</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">992</span> Performance Evaluation of Routing Protocols for Video Conference over MPLS VPN Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdullah%20Al%20Mamun">Abdullah Al Mamun</a>, <a href="https://publications.waset.org/abstracts/search?q=Tarek%20R.%20Sheltami"> Tarek R. Sheltami</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video conferencing is a highly demanding facility nowadays owing to its real-time characteristics, and faster communication is the primary requirement of this technology.
Multi Protocol Label Switching (MPLS) IP Virtual Private Network (VPN) addresses this problem and can make communication faster than other techniques. This paper studies the performance of video traffic under two routing protocols, the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). The combination of traditional routing and MPLS improves the forwarding mechanism, scalability, and overall network performance. We use GNS3 and OPNET Modeler 14.5 to simulate many different scenarios, measuring metrics such as delay, jitter, and mean opinion score (MOS). The simulation results show that OSPF with BGP-MPLS VPN offers the best performance for the video conferencing application. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=OSPF" title="OSPF">OSPF</a>, <a href="https://publications.waset.org/abstracts/search?q=BGP" title=" BGP"> BGP</a>, <a href="https://publications.waset.org/abstracts/search?q=EIGRP" title=" EIGRP"> EIGRP</a>, <a href="https://publications.waset.org/abstracts/search?q=MPLS" title=" MPLS"> MPLS</a>, <a href="https://publications.waset.org/abstracts/search?q=Video%20conference" title=" Video conference"> Video conference</a>, <a href="https://publications.waset.org/abstracts/search?q=Provider%20router" title=" Provider router"> Provider router</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20router" title=" edge router"> edge router</a>, <a href="https://publications.waset.org/abstracts/search?q=layer3%20VPN" title=" layer3 VPN"> layer3 VPN</a> </p> <a href="https://publications.waset.org/abstracts/32093/performance-evaluation-of-routing-protocols-for-video-conference-over-mpls-vpn-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1
float-right rounded"> Downloads <span class="badge badge-light">331</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">991</span> A Study of Taiwanese Students' Language Use in the Primary International Education via Video Conferencing Course</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chialing%20Chang">Chialing Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language and culture are critical foundations of international mobility. However, students who are limited to their local environment may have their learning outcomes and global perspective constrained. Video conferencing has proven to be an economical medium for students to communicate with international students around the world. In Taiwan, the National Development Commission advocated bilingual national policies for 2030 to enhance national competitiveness and foster English proficiency, and fully launched the bilingual activation of the education system. Globalization is closely related to the development of Taiwan's education. Therefore, the teacher conducted an integrated lesson through interdisciplinary learning. This study aims to investigate how the teacher helps develop students' global and language core competencies in the international education class. The methodology comprises four stages: lesson planning, class observation, learning data collection, and speech analysis. Grice's conversational maxims are adopted to analyze the students' conversation in the video conferencing course. This is action research drawn from the teacher's reflection on approaches to developing students' language learning skills.
The study lays a foundation for the teacher's professional development in international education and for improving teaching quality and effectiveness, serving as a reference for future instruction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=international%20education" title="international education">international education</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20learning" title=" language learning"> language learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Grice%27s%20conversational%20maxims" title=" Grice's conversational maxims"> Grice's conversational maxims</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course" title=" video conferencing course"> video conferencing course</a> </p> <a href="https://publications.waset.org/abstracts/124122/a-study-of-taiwanese-students-language-use-in-the-primary-international-education-via-video-conferencing-course" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">990</span> Improving Second Language Speaking Skills via Video Exchange</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nami%20Takase">Nami Takase</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computer-mediated communication allows people to connect and interact with each other as if they were sharing the same space.
The current study examined the effects of using video letters (VLs) on the development of second language speaking skills of Common European Framework of Reference for Languages (CEFR) A1 and B2 level learners of English as a foreign language. Two groups were formed to measure the impact of VLs. The experimental and control groups were given the same topic, and both worked with a native English-speaking university student from the United States of America. Students in the experimental group exchanged VLs, while students in the control group used video conferencing. Pre- and post-tests were conducted to examine the effects of each practice mode. The transcribed speech-text data showed that the VL group improved in speech accuracy scores, while the video conferencing group increased in sentence complexity scores. The use of VLs may be more effective for beginner-level learners because they are able to notice their own errors and replay videos to better understand the native speaker’s speech at their own pace. Both the VL and video conferencing groups provided positive feedback regarding their interactions with native speakers. The results show how different types of computer-mediated communication impact different areas of language learning and speaking practice, and how each type of online communication tool is suited to different teaching objectives.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-assisted-language-learning" title="computer-assisted-language-learning">computer-assisted-language-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-mediated-communication" title=" computer-mediated-communication"> computer-mediated-communication</a>, <a href="https://publications.waset.org/abstracts/search?q=english%20as%20a%20foreign%20language" title=" english as a foreign language"> english as a foreign language</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking" title=" speaking"> speaking</a> </p> <a href="https://publications.waset.org/abstracts/155719/improving-second-language-speaking-skills-via-video-exchange" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155719.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">989</span> Cosmetic Surgery on the Rise: The Impact of Remote Communication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bruno%20Di%20Pace">Bruno Di Pace</a>, <a href="https://publications.waset.org/abstracts/search?q=Roxanne%20H.%20Padley"> Roxanne H. Padley</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aims: The recent rise in remote video interaction has increased the number of requests for teleconsultations with plastic surgeons in private practice (70% in the UK and 64% in the USA). This study investigated the motivations for this increase and the underlying psychological impact on patients.
Method: An anonymous web-based poll of 8 questions was designed and distributed to patients seeking cosmetic surgery through social networks in both Italy and the UK. The questions gathered responses regarding 1. Reasons for pursuing cosmetic surgery; 2. The effects of delays caused by the SARS-CoV-2 pandemic; 3. The effects on mood; 4. The influence of video conferencing on body-image perception. Results: 85 respondents completed the online poll. Overall, 68% of respondents stated that seeing themselves more frequently online had influenced their decision to seek cosmetic surgery. The types of surgeries indicated were predominantly to the upper body and face (82%). Delays and limited access to surgeons during the pandemic were perceived as negatively impacting patients' moods (95%). Body-image perception and self-esteem were lower than in the pre-pandemic period, particularly during lockdown (72%). Patients were more inclined to undergo cosmetic surgery during the pandemic, both due to the wish to improve their “lockdown face” for video conferencing (77%) and due to the benefits of recovering at home while working remotely (58%). Conclusions: Overall, findings suggest that video conferencing has led to a significant increase in requests for cosmetic surgery and the so-called “Zoom Boom” effect.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cosmetic%20surgery" title="cosmetic surgery">cosmetic surgery</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20communication" title=" remote communication"> remote communication</a>, <a href="https://publications.waset.org/abstracts/search?q=telehealth" title=" telehealth"> telehealth</a>, <a href="https://publications.waset.org/abstracts/search?q=zoom%20boom" title=" zoom boom"> zoom boom</a> </p> <a href="https://publications.waset.org/abstracts/139832/cosmetic-surgery-on-the-rise-the-impact-of-remote-communication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139832.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">988</span> The Use of Videoconferencing in a Task-Based Beginners' Chinese Class</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sijia%20Guo">Sijia Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of new technologies and the falling cost of high-speed Internet access have made it easier for institutes and language teachers to adopt different ways of communicating with students at a distance. The emergence of web-conferencing applications, which integrate text, chat, audio/video, and graphic facilities, offers great opportunities for language learning through a multimodal environment. This paper reports on data elicited from a Ph.D. study on the use of web-conferencing in the teaching of a first-year Chinese class in order to promote learners’ collaborative learning.
Firstly, a comparison of four desktop videoconferencing (DVC) tools was conducted to determine the pedagogical value of the videoconferencing tool Blackboard Collaborate. Secondly, the evaluation of 14 campus-based Chinese learners who conducted five one-hour online sessions via the multimodal environment reveals the users’ choice of modes and their learning preferences. The findings show that the tasks designed for the web-conferencing environment contributed to the learners’ collaborative learning and second language acquisition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-mediated%20communication%20%28CMC%29" title="computer-mediated communication (CMC)">computer-mediated communication (CMC)</a>, <a href="https://publications.waset.org/abstracts/search?q=CALL%20evaluation" title=" CALL evaluation"> CALL evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=TBLT" title=" TBLT"> TBLT</a>, <a href="https://publications.waset.org/abstracts/search?q=web-conferencing" title=" web-conferencing"> web-conferencing</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20Chinese%20teaching" title=" online Chinese teaching"> online Chinese teaching</a> </p> <a href="https://publications.waset.org/abstracts/43984/the-use-of-videoconferencing-in-a-task-based-beginners-chinese-class" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43984.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">309</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">987</span> Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H.265</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sarumathi">S. Sarumathi</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Deepadharani"> C. Deepadharani</a>, <a href="https://publications.waset.org/abstracts/search?q=Garimella%20Archana"> Garimella Archana</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Dakshayani"> S. Dakshayani</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Logeshwaran"> D. Logeshwaran</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jayakumar"> D. Jayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayarangan%20Natarajan"> Vijayarangan Natarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For customers who rely on a dial-up or low-bandwidth broadband connection for their internet service, the pressing need is access to HD video data. This can be achieved by developing a new video format using H.265, the latest video codec standard, finalized by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) in April 2013. This new standard for video compression has the potential to deliver higher performance than earlier standards such as H.264/AVC. In comparison with H.264, HEVC offers a clearer, higher-quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high-definition videos using low bandwidth. It doubles the data compression ratio, supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports the H.265 standard. The major application areas in the coming future would be enhancements in the performance of digital television services like Tata Sky and Sun Direct, Blu-ray Discs, mobile video, video conferencing, and Internet and live video streaming.
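As a rough back-of-the-envelope sketch (not part of the abstract itself), the widely cited ~50% bitrate reduction of HEVC relative to H.264/AVC at comparable quality can be illustrated as follows; the 8 Mbps H.264 input figure is an assumed example value, not a number from the paper:

```python
# Hypothetical illustration of HEVC's commonly cited ~50% bitrate saving
# over H.264/AVC at comparable visual quality. The input bitrate below is
# an assumed example, not a figure from the paper.

def hevc_bitrate_mbps(h264_bitrate_mbps: float, reduction: float = 0.5) -> float:
    """Estimate the HEVC bitrate for roughly the same quality, assuming a
    `reduction` fraction of the H.264/AVC bitrate is saved."""
    return h264_bitrate_mbps * (1.0 - reduction)

# An HD stream needing ~8 Mbps in H.264 would need roughly 4 Mbps in HEVC,
# which is what makes HD delivery plausible on low-bandwidth links.
print(hevc_bitrate_mbps(8.0))  # 4.0
```

The exact saving varies with content and encoder settings; 50% is the headline figure usually quoted for HEVC versus its predecessor.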
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20HD%20video" title="access HD video">access HD video</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20265%20video%20standard" title=" H. 265 video standard"> H. 265 video standard</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance" title=" high performance"> high performance</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20quality%20image" title=" high quality image"> high quality image</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20bandwidth" title=" low bandwidth"> low bandwidth</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20video%20format" title=" new video format"> new video format</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20streaming%20applications" title=" video streaming applications"> video streaming applications</a> </p> <a href="https://publications.waset.org/abstracts/1881/efficient-storage-and-intelligent-retrieval-of-multimedia-streams-using-h-265" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">986</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a design for an H.263-based wireless video transceiver is presented for a wireless camera system.
It uses a standard Wi-Fi transceiver, with a coverage area of up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since a wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">985</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for the extraction of text subtitles from large videos is proposed. The video data needs to be annotated for many multimedia applications.
Text is incorporated into digital video to provide useful information about that video, so there is a need to detect the text present in video for understanding and indexing. This is achieved in two steps: the first is text localization and the second is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">984</span> Performance of High Efficiency Video Codec over Wireless Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Ayyub%20Khan">Mohd Ayyub Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadeem%20Akhtar"> Nadeem Akhtar</a> </p> <p
class="card-text"><strong>Abstract:</strong></p> Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is the latest state-of-the-art video coding standard, developed through the joint effort of ITU-T and ISO/IEC teams. HEVC is targeted at high-resolution videos, such as 4K or 8K, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder.
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AWGN" title="AWGN">AWGN</a>, <a href="https://publications.waset.org/abstracts/search?q=forward%20error%20correction" title=" forward error correction"> forward error correction</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20coding" title=" video coding"> video coding</a>, <a href="https://publications.waset.org/abstracts/search?q=QAM" title=" QAM"> QAM</a> </p> <a href="https://publications.waset.org/abstracts/92062/performance-of-high-efficiency-video-codec-over-wireless-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92062.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">149</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">983</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid
Benkaddour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, huge multimedia repositories make the browsing, retrieval, and delivery of video content slow and even difficult. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental ways to generate summaries are static and dynamic. We present techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">982</span> The Use of Video Conferencing to Aid the Decision in Whether Vulnerable Patients Should Attend In-Person Appointments during a COVID
Pandemic</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nadia%20Arikat">Nadia Arikat</a>, <a href="https://publications.waset.org/abstracts/search?q=Katharine%20Blain"> Katharine Blain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the worst of the COVID pandemic, only essential treatment was provided for patients needing urgent care. With the prolonged extent of the pandemic, there has been a return to more routine referrals for paediatric dentistry advice and treatment for specialist conditions. However, some of these patients and/or their carers may have significant medical issues meaning that attending in-person appointments carries additional risks. This poses an ethical dilemma for clinicians. This project looks at how a secure video conferencing platform (“Near Me”) has been used to assess the need and urgency for in-person new patient visits, particularly for patients and families with additional risks. “Near Me” is a secure online video consulting service used by NHS Scotland. In deciding whether to bring a new patient to the hospital for an appointment, the clinical condition of the teeth together with the urgency for treatment need to be assessed. This is not always apparent from the referral letter. In addition, it is important to judge the risks to the patients and carers of such visits, particularly if they have medical issues. The use and effectiveness of “Near Me” consultations to help decide whether vulnerable paediatric patients should have in-person appointments will be illustrated and discussed using two families: one where the child is medically compromised (Alagille syndrome with previous liver transplant), and the other where there is a medically compromised parent (undergoing chemotherapy and a bone marrow transplant). 
In both cases, it was necessary to take into consideration the risks and moral implications of requesting that they attend the dental hospital during a pandemic. The option of remote consultation allowed further clinical information to be evaluated and the families to take part in the decision-making process about whether and when such visits should be scheduled. These cases will demonstrate how medically compromised patients (or patients with vulnerable carers) could have their dental needs assessed in a socially distanced manner by video consultation. Together, the clinician and the patient’s family can weigh up the risks, with regard to COVID-19, of attending in-person appointments against the benefit of having treatment. This is particularly important for new paediatric patients who have not yet had a formal assessment. The limitations of this technology will also be discussed. It is limited by internet availability, connection strength, video quality and whether families own a device that allows video calls. For those from a lower socio-economic background or living in some rural areas, this may not be possible, limiting its usefulness. For the two patients discussed in this project, where the urgency of their dental condition was unclear, video consultation proved beneficial in deciding an appropriate outcome and preventing unnecessary exposure of vulnerable people to a hospital environment during a pandemic, demonstrating the usefulness of such technology when it is used appropriately. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title="COVID-19">COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=paediatrics" title=" paediatrics"> paediatrics</a>, <a href="https://publications.waset.org/abstracts/search?q=triage" title=" triage"> triage</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20consultations" title=" video consultations"> video consultations</a> </p> <a href="https://publications.waset.org/abstracts/142314/the-use-of-video-conferencing-to-aid-the-decision-in-whether-vulnerable-patients-should-attend-in-person-appointments-during-a-covid-pandemic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142314.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">981</span> Lecture Video Indexing and Retrieval Using Topic Keywords</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Sandesh">B. J. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework to help users search and retrieve the portions of a lecture video that are of interest to them. 
This is achieved by temporally segmenting and indexing the lecture video using the topic keywords. We use transcribed text from the video and documents relevant to the video topic extracted from the web for this purpose. The keywords for indexing are found by applying the non-negative matrix factorization (NMF) topic modeling techniques on the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end time of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword which is used to retrieve the specific portions of the video for the query provided by the users. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">980</span> Distributed Processing for Content Based Lecture Video 
Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching this huge database for videos on required topics is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of lecture video data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to key-frames. The OCR output and detected slide text line types are used for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework. 
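The segmentation and key-frame detection step described in this abstract can be illustrated with a simple frame-differencing heuristic. This is a minimal sketch under assumed inputs (synthetic grayscale frames and an illustrative threshold), not the authors' actual pipeline:

```python
import numpy as np

def detect_key_frames(frames, threshold=0.25):
    """Pick key frames where the mean absolute difference between
    consecutive frames exceeds a threshold -- a simple stand-in for
    the segmentation/key-frame step described in the abstract."""
    key_indices = [0]  # the first frame always opens a segment
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff / 255.0 > threshold:
            key_indices.append(i)
    return key_indices

# Synthetic 8x8 grayscale "video": two static shots with a hard cut at frame 3.
shot_a = [np.zeros((8, 8), dtype=np.uint8)] * 3
shot_b = [np.full((8, 8), 200, dtype=np.uint8)] * 3
print(detect_key_frames(shot_a + shot_b))  # -> [0, 3]
```

In a Hadoop deployment, each mapper would run this kind of detector over its own chunk of frames, which is what makes the workload embarrassingly parallel.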
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">979</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capture by non-professionals often leads to unanticipated effects such as image distortion and image blurring; hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. Firstly, salient points are identified in each frame of the input video and processed, followed by optimization to stabilize the video. Optimization addresses the quality of the video stabilization. 
This method has shown good results in terms of stabilization, and it removed distortion from the output videos recorded in different circumstances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">978</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 58 years since the first video game came into being, the video game industry has gone through an explosive evolution. Video games exert great influence on society and have become a reflection of public life to some extent. 
Video game virtual spaces are where activities take place, much like real spaces, which is why some architects pay attention to video games. However, compared to research on the appearance of games, we observe a lack of comprehensive theory on the construction of video game virtual spaces. The research method of this paper is first to collect literature and conduct theoretical research on virtual space in video games, and then to draw analogies with views on spatial phenomena from literary and film theory. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space, narrative space, player space”, which correspond to the exterior, expressive and affective parts of the game space. We also illustrate each sub-space with numerous instances of published video games. We hope this writing can promote the interactive development of video games and architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span 
class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">977</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as the solution to an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches. 
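The coverage-versus-redundancy trade-off described above can be sketched with a greedy selection procedure. Note the assumptions here: a Gaussian-kernel similarity stands in for the quadratic mutual information measure, and the greedy loop, feature vectors, and `lam` weight are illustrative, not the author's optimization:

```python
import numpy as np

def greedy_key_frames(features, k, lam=1.0):
    """Greedy surrogate for the coverage-vs-redundancy objective:
    each step adds the frame maximizing mean similarity to all frames
    (coverage) minus lam times its max similarity to already-selected
    frames (redundancy)."""
    n = len(features)
    # Pairwise squared distances and a Gaussian-kernel similarity matrix.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2.0 * np.median(d2[d2 > 0])))
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            coverage = sim[i].mean()
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

# Frames clustered into two visually distinct groups: the selection
# should pick one representative from each cluster.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(greedy_key_frames(feats, 2))
```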
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">976</span> [Keynote Talk]: Computer-Assisted Language Learning (CALL) for Teaching English to Speakers of Other Languages (TESOL/ESOL) as a Foreign Language (TEFL/EFL), Second Language (TESL/ESL), or Additional Language (TEAL/EAL)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrew%20Laghos">Andrew Laghos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computer-assisted language learning (CALL) is defined as the use of computers to help learn languages. In this study we look at several different types of CALL tools and applications and how they can assist Adults and Young Learners in learning the English language as a foreign, second or additional language. 
It is important to identify the roles of the teacher and the learners, and what the learners’ motivations are for learning the language. Audio, video, interactive multimedia games, online translation services, conferencing, chat rooms, discussion forums, social networks, social media, email communication, songs and music video clips are just some of the many ways computers are currently being used to enhance language learning. CALL may be used for classroom teaching as well as for online and mobile learning. Advantages and disadvantages of CALL are discussed and the study ends with future predictions of CALL. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer-assisted%20language%20learning%20%28CALL%29" title="computer-assisted language learning (CALL)">computer-assisted language learning (CALL)</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching%20English%20as%20a%20foreign%20language%20%28TEFL%2FEFL%29" title=" teaching English as a foreign language (TEFL/EFL)"> teaching English as a foreign language (TEFL/EFL)</a>, <a href="https://publications.waset.org/abstracts/search?q=adult%20learners" title=" adult learners"> adult learners</a>, <a href="https://publications.waset.org/abstracts/search?q=young%20learners" title=" young learners"> young learners</a> </p> <a href="https://publications.waset.org/abstracts/47997/keynote-talk-computer-assisted-language-learning-call-for-teaching-english-to-speakers-of-other-languages-tesolesol-as-a-foreign-language-teflefl-second-language-teslesl-or-additional-language-tealeal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47997.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">434</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">975</span> A Unified Webcam Proctoring Solution on Edge</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saw%20Thiha">Saw Thiha</a>, <a href="https://publications.waset.org/abstracts/search?q=Jay%20Rajasekera"> Jay Rajasekera</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The boom in video conferencing generates millions of hours of video data daily to be analyzed. However, such enormous data poses scalability issues for efficient analysis, let alone real-time analysis, as online conferences can involve hundreds of people and can last for hours. This paper proposes an efficient online proctoring solution that can analyze online conferences in real time on edge devices such as Android, iOS, and desktops. Since the computation can be done upfront on the devices where online conferences take place, it can scale well without requiring intensive resources such as GPU servers and complex cloud infrastructure. According to the linear models, face orientation does indeed impact the perceived eye openness. Also, the proposed z-score facial landmark standardization was proven to be effective in detecting face orientation and contributed to classifying eye blinks with a single eyelid distance computation, while achieving a better F1 score and accuracy than the Eye Aspect Ratio (EAR) threshold method. Last but not least, the authors implemented the solution natively in the MediaPipe framework and open-sourced it along with the reproducible experimental results on GitHub. The solution provides face orientation, eye blink, facial activity, and translation detections out of the box and is highly customizable and extensible. 
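The z-score standardization plus single eyelid-distance idea can be sketched as follows. All specifics here are assumptions for illustration (the landmark indices, synthetic coordinates, and threshold are invented, not the authors' trained values or MediaPipe's landmark scheme):

```python
import numpy as np

def zscore_standardize(landmarks):
    """Z-score standardize 2-D facial landmarks so that eyelid
    distances become comparable across face sizes and camera
    distances, loosely following the standardization idea above."""
    return (landmarks - landmarks.mean(axis=0)) / landmarks.std(axis=0)

def is_blink(upper_lid, lower_lid, landmarks, threshold=0.2):
    """Classify a blink from one eyelid distance computed on
    standardized landmarks (threshold is illustrative)."""
    std_lm = zscore_standardize(landmarks)
    dist = np.linalg.norm(std_lm[upper_lid] - std_lm[lower_lid])
    return dist < threshold

# Four face-outline points plus upper/lower eyelid points (indices 4, 5).
base = [[-2.0, -2.0], [2.0, -2.0], [-2.0, 2.0], [2.0, 2.0]]
open_eye = np.array(base + [[0.0, 1.0], [0.0, -1.0]])
closed_eye = np.array(base + [[0.0, 0.05], [0.0, -0.05]])
print(is_blink(4, 5, open_eye), is_blink(4, 5, closed_eye))  # False True
```

Because the landmarks are standardized first, the same threshold works whether the face is near or far from the webcam, which is the point of the standardization step.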
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=android" title="android">android</a>, <a href="https://publications.waset.org/abstracts/search?q=desktop" title=" desktop"> desktop</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20computing" title=" edge computing"> edge computing</a>, <a href="https://publications.waset.org/abstracts/search?q=blink" title=" blink"> blink</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20orientation" title=" face orientation"> face orientation</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20activity%20and%20translation" title=" facial activity and translation"> facial activity and translation</a>, <a href="https://publications.waset.org/abstracts/search?q=MediaPipe" title=" MediaPipe"> MediaPipe</a>, <a href="https://publications.waset.org/abstracts/search?q=open%20source" title=" open source"> open source</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20conference" title=" video conference"> video conference</a>, <a href="https://publications.waset.org/abstracts/search?q=web" title=" web"> web</a>, <a href="https://publications.waset.org/abstracts/search?q=iOS" title=" iOS"> iOS</a>, <a href="https://publications.waset.org/abstracts/search?q=Z%20score%20facial%20landmark%20standardization" title=" Z score facial landmark standardization"> Z score facial landmark standardization</a> </p> <a href="https://publications.waset.org/abstracts/155052/a-unified-webcam-proctoring-solution-on-edge" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155052.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">97</span> </span> </div> </div> 
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">974</span> Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Assma%20Azeroual">Assma Azeroual</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Afdel"> Karim Afdel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El%20Hajji"> Mohamed El Hajji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Douzi"> Hassan Douzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper, we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks. These dominant blocks are located on the contours and the textures near them. When a video frame changes noticeably, its dominant blocks change, and we can then extract a key frame. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced, and we are then able to extract the key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions. 
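The SVD sliding-window rank step can be sketched as follows. The feature vectors and window size here are illustrative assumptions; the paper derives its features from FSDWT dominant blocks rather than the toy columns used below:

```python
import numpy as np

def sliding_window_ranks(feature_matrix, window=4, tol=1e-6):
    """Numerical rank of each sliding window of per-frame feature
    columns, computed from singular values. A rank jump signals that
    a dissimilar frame entered the window, i.e. a candidate shot
    boundary / key frame."""
    n = feature_matrix.shape[1]
    ranks = []
    for start in range(n - window + 1):
        s = np.linalg.svd(feature_matrix[:, start:start + window], compute_uv=False)
        ranks.append(int((s > tol * s.max()).sum()))
    return ranks

# Six frames: the first four share one feature vector, the last two
# share another, so the rank rises once the new content slides in.
F = np.array([[1.0, 1, 1, 1, 0, 0],
              [0.0, 0, 0, 0, 1, 1],
              [0.0, 0, 0, 0, 0, 0]])
print(sliding_window_ranks(F, window=4))  # -> [1, 2, 2]
```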
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">973</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult for a computer system to evaluate musical instrument playing in a video. Any television or film video clip with music information is a rich source for analyzing musical instruments using modern machine learning technologies. 
In this research, we integrate the audio and video information sources using convolutional neural networks (CNN) and pass the network-learned features through a recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNNs for music and video feature extraction and then fine-tune each model. The music network uses a 2D convolutional network, and the video network uses 3D convolution (C3D). Finally, we concatenate the music and video features while preserving the time-varying features. A long short-term memory (LSTM) network is used for long-term dynamic feature characterization, followed by late fusion with a generalized mean. The proposed audio-video multimodal neural network achieves better performance in recognizing musical instruments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">972</span> Surveillance Video Summarization Based on Histogram 
Differencing and Sum Conditional Variance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Jasim%20Habeeb">Nada Jasim Habeeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Rana%20Saad%20Mohammed"> Rana Saad Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muntaha%20Khudair%20Abbass"> Muntaha Khudair Abbass </a> </p> <p class="card-text"><strong>Abstract:</strong></p> For more efficient and faster video summarization, this paper presents a surveillance video summarization method that improves on existing video summarization techniques. The method depends on temporal differencing to extract the most important data from a large video stream. It uses histogram differencing and the sum conditional variance, which is robust against illumination variations, in order to extract motion objects. The experimental results showed that the presented method gives better output compared with temporal-differencing-based summarization techniques. 
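The histogram-differencing cue can be sketched as below. This is a minimal illustration under stated assumptions: the bin count and threshold are invented, and the sum conditional variance component of the method is omitted for brevity:

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=16):
    """L1 distance between normalized intensity histograms of two
    frames; large values indicate motion or content change, the
    histogram-differencing cue used for summarization."""
    h_a, _ = np.histogram(frame_a, bins=bins, range=(0, 256), density=True)
    h_b, _ = np.histogram(frame_b, bins=bins, range=(0, 256), density=True)
    return np.abs(h_a - h_b).sum()

def summarize(frames, threshold=0.01):
    """Keep only frames whose histogram differs enough from the last
    kept frame (illustrative threshold)."""
    kept = [0]
    for i in range(1, len(frames)):
        if histogram_difference(frames[kept[-1]], frames[i]) > threshold:
            kept.append(i)
    return kept

# Three identical dark frames followed by a bright frame: only the
# first frame and the change frame survive the summary.
dark = np.zeros((4, 4), dtype=np.uint8)
bright = np.full((4, 4), 200, dtype=np.uint8)
print(summarize([dark, dark, dark, bright]))  # -> [0, 3]
```

Comparing histograms rather than raw pixels is what gives the cue some tolerance to small movements, though on its own it remains sensitive to global illumination shifts, which is why the method pairs it with the sum conditional variance.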
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20differencing" title="temporal differencing">temporal differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title=" video summarization"> video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20differencing" title=" histogram differencing"> histogram differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20conditional%20variance" title=" sum conditional variance"> sum conditional variance</a> </p> <a href="https://publications.waset.org/abstracts/54404/surveillance-video-summarization-based-on-histogram-differencing-and-sum-conditional-variance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">971</span> Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Azimi">Maryam Azimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Banitalebi-Dehkordi"> Amin Banitalebi-Dehkordi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Dong"> Yuanyuan Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20T.%20Pourazad"> Mahsa T. 
Pourazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Panos%20Nasiopoulos"> Panos Nasiopoulos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for High Dynamic Range (HDR) content. With the introduction of the HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, consisting of representative indoor and outdoor video sequences with different brightness levels, motion levels, and representative types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all the other tested metrics in the presence of the tested types of distortions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HDR" title="HDR">HDR</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range" title=" dynamic range"> dynamic range</a>, <a href="https://publications.waset.org/abstracts/search?q=LDR" title=" LDR"> LDR</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title=" video compression"> video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20metrics" title=" video quality metrics"> video quality metrics</a> </p> <a href="https://publications.waset.org/abstracts/18171/evaluating-the-performance-of-existing-full-reference-quality-metrics-on-high-dynamic-range-hdr-video-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">525</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">970</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> 
</p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">969</span> Difficulties for Implementation of Telenursing: An Experience Report</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20A.%20G.%20Sachett">Jacqueline A. G. Sachett</a>, <a href="https://publications.waset.org/abstracts/search?q=Cl%C3%A1udia%20S.%20Nogueira"> Cláudia S. Nogueira</a>, <a href="https://publications.waset.org/abstracts/search?q=Diana%20C.%20P.%20Lima"> Diana C. P. Lima</a>, <a href="https://publications.waset.org/abstracts/search?q=Jessica%20T.%20S.%20Oliveira"> Jessica T. S. Oliveira</a>, <a href="https://publications.waset.org/abstracts/search?q=Guilherme%20K.%20M.%20Salazar"> Guilherme K. M. Salazar</a>, <a href="https://publications.waset.org/abstracts/search?q=L%C3%ADlian%20K.%20Aguiar"> Lílian K. Aguiar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Polo Amazon Telehealth offers several tools for professionals working in Primary Health Care, including second formative opinions, teleconsulting, and training across different areas such as medicine, dentistry, nursing and physiotherapy. These activities follow a monthly schedule and are freely accessible to the registered municipalities of Amazonas. On this premise, and in partnership with the University of the State of Amazonas (UEA), the Polo promotes the teaching-research-extension triad in order to foster the enrichment and acquisition of knowledge through educational practices carried out by teleconference. Nursing has therefore joined this effort as a collaborator in the ongoing project, contributing to the education and training of the professionals who make up the health system throughout the Amazon. The aim of this study is to report the experience of a nursing student of Amazonas State University in the extension project underway at Polo Telemedicine Amazon. 
This was a descriptive study of the experience-report type, describing the experience of a UEA nursing student in the extension project 'Telenursing: teleconsulting and second formative opinion for FHS professionals in the state of Amazonas', carried out at Polo Telemedicine Amazon through an agreement with the UEA and funded by the Amazonas Research Foundation from July 2012 to July 2016. Initially, an active search was made for Family Health Strategy professionals in order to train the teams to use the virtual clinic and the virtual learning environment, which are the focus of this tool's design. The election period was an aggravating factor for the implementation of the teleconsulting proposal: each change of municipal managers required stopping the work until the new managers took office, after which new training had to be arranged. The first video conference took place on 14 March 2013 for learning and training in the use of the Virtual Learning Environment and the Virtual Clinic, with the participation of the municipalities of Novo Aripuanã, São Paulo de Olivença and Manacapuru. Throughout the project, a literature review was carried out on what is being done and produced at the national level on the subject. To date, the telenursing project has received twenty-five (25) consultancy requests, all sent by nursing professionals and all answered. The lived experience, particularly with video conferencing, revealed difficulties such as fluctuation in the number of participants in activities, difficulty for participants in reconciling the opening hours of the units with the video conferencing schedule, transmission problems, and schedule changes. It was concluded that establishing the connection between the Telehealth points is one of the main factors for the implementation of telenursing, and that this resource is still new for nursing. 
However, effective training and continuing education may give this professional category the means to deliver quality health care in the Amazon. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amazon" title="Amazon">Amazon</a>, <a href="https://publications.waset.org/abstracts/search?q=teleconsulting" title=" teleconsulting"> teleconsulting</a>, <a href="https://publications.waset.org/abstracts/search?q=telehealth" title=" telehealth"> telehealth</a>, <a href="https://publications.waset.org/abstracts/search?q=telenursing" title=" telenursing"> telenursing</a> </p> <a href="https://publications.waset.org/abstracts/62486/difficulties-for-implementation-of-telenursing-an-experience-report" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">968</span> H.264 Video Privacy Protection Method Using Regions of Interest Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taekyun%20Doo">Taekyun Doo</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheongmin%20Ji"> Cheongmin Ji</a>, <a href="https://publications.waset.org/abstracts/search?q=Manpyo%20Hong"> Manpyo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video surveillance systems such as closed-circuit television (CCTV) are widely deployed to gather video of unspecified people for crime prevention, surveillance, and many other purposes. However, abuse of CCTV raises concerns about invasions of personal privacy. 
In this paper, we propose an encryption method to protect personal privacy in H.264 compressed video bitstreams by encrypting only regions of interest (ROI), with no need to change the existing video surveillance system. Encrypting ROI in a compressed video bitstream is a challenging task due to spatial and temporal drift errors; for this reason, we propose a novel drift mitigation method for when ROI is encrypted. The proposed method was implemented using the JM reference software on H.264 compressed videos, and experimental results verify the proposed method and demonstrate its effectiveness. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264%2FAVC" title="H.264/AVC">H.264/AVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20encryption" title=" video encryption"> video encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=privacy%20protection" title=" privacy protection"> privacy protection</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20compression" title=" post compression"> post compression</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a> </p> <a href="https://publications.waset.org/abstracts/57651/h264-video-privacy-protection-method-using-regions-of-interest-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">967</span> The Impact of Keyword and Full Video Captioning on Listening Comprehension</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elias%20Bensalem">Elias Bensalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the effect of two types of captioning (full and keyword captioning) on listening comprehension. Thirty-six university-level EFL students participated in the study. They were randomly assigned to watch three video clips under three conditions. The first group watched the video clips with full captions. The second group watched the same video clips with keyword captions. The control group watched the video clips without captions. After watching each clip, participants took a listening comprehension test. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the use of captions and the video clips they watched. Results indicated that the full captioning group significantly outperformed both the keyword captioning and the no captioning groups on the listening comprehension tests. However, this study did not find any significant difference between the keyword captioning group and the no captioning group. Results of the survey suggest that keyword captions were a source of distraction for participants. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=captions" title="captions">captions</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL" title=" EFL"> EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=listening%20comprehension" title=" listening comprehension"> listening comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a> </p> <a href="https://publications.waset.org/abstracts/62467/the-impact-of-keyword-and-full-video-captioning-on-listening-comprehension" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">966</span> Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salam%20Khalifa">Salam Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a new method to reconstruct a temporally coherent 3D animation from single or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. 
We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20video" title="3D video">3D video</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20animation" title=" 3D animation"> 3D animation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20video" title=" RGB-D video"> RGB-D video</a>, <a href="https://publications.waset.org/abstracts/search?q=temporally%20coherent%203D%20animation" title=" temporally coherent 3D animation"> temporally coherent 3D animation</a> </p> <a href="https://publications.waset.org/abstracts/12034/temporally-coherent-3d-animation-reconstruction-from-rgb-d-video-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">965</span> Developing Digital Competencies in Aboriginal Students through University-College Partnerships </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=W.%20S.%20Barber">W. S. Barber</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20L.%20King"> S. L. 
King </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reports on a pilot project to develop a collaborative partnership between a community college in rural northern Ontario, Canada, and an urban university in Oshawa, in the greater Toronto area. The partner institutions will collaborate to address the learning needs of university applicants whose goal is an undergraduate BA in Educational Studies and Digital Technology, but who may not live in a geographical location that facilitates this pathway. The UOIT BA degree is attained through a 2+2 program, in which students with a two-year college diploma or equivalent can attain a four-year undergraduate degree. The goals of the project are: 1. to expand the BA program to include an additional stream covering serious educational games, simulations and virtual environments; 2. to develop fully online learning modules (using both synchronous and asynchronous technologies) for use by university applicants who are not geographically located close to a physical university site; 3. to assess the digital competencies of all students, including members of local, distance and Indigenous communities, using a validated tool developed and tested by UOIT across numerous populations. This tool, the General Technical Competency Use and Scale (GTCU), will provide the collaborating institutions with data for analyzing how well students are prepared to succeed in fully online learning communities. Philosophically, the UOIT BA program is based on a fully online learning communities (FOLC) model that can be accessed from anywhere in the world through digital learning environments via audio-video conferencing tools such as Adobe Connect. 
It also follows models of adult learning and mobile learning, and makes a university degree accessible to the increasing demographic of adult learners who may use mobile devices to learn anywhere, anytime. The program is based on key principles of Problem-Based Learning, allowing students to build their own understanding through the co-design of the learning environment in collaboration with the instructors and their peers. In this way, the degree allows students to personalize and individualize their learning based on their own culture, background and professional and personal experiences. Using modified flipped-classroom strategies, students are able to work through video modules on their own time in preparation for one-hour discussions held in video conferencing sessions. As a consequence of the program's flexibility, students may continue to work full or part time. All of the partner institutions will co-develop four new modules, administer the GTCU and share data, while creating a new stream of the UOIT BA degree. This will increase accessibility for students bridging from community colleges to university through a fully digital environment. We aim to work collaboratively with Indigenous elders, community members and distance education instructors to increase opportunities for more students to attain a university education. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aboriginal" title="aboriginal">aboriginal</a>, <a href="https://publications.waset.org/abstracts/search?q=college" title=" college"> college</a>, <a href="https://publications.waset.org/abstracts/search?q=competencies" title=" competencies"> competencies</a>, <a href="https://publications.waset.org/abstracts/search?q=digital" title=" digital"> digital</a>, <a href="https://publications.waset.org/abstracts/search?q=universities" title=" universities"> universities</a> </p> <a href="https://publications.waset.org/abstracts/64367/developing-digital-competencies-in-aboriginal-students-through-university-college-partnerships" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64367.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">964</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court in cases such as child pornography, movie piracy, insurance claims, scientific fraud, traffic monitoring, etc. 
The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for detecting inter-frame video tampering, along with its type and location, by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with z-score thresholding, achieving an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) backgrounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">963</span> Video Sharing System Based On Wi-fi Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Qidi%20Lin">Qidi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinbin%20Huang"> Jinbin Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weile%20Liang"> Weile Liang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a Wi-Fi-based video sharing platform consisting of a camera, mobile phones and a PC server. The platform receives the wireless signal from the camera and shows on the mobile phone the live video captured by the camera. In addition, it can send commands to the camera and rotate the camera's holder. The platform can be applied to interactive teaching, the monitoring of dangerous areas, and so on. Testing results show that the platform can share live video to mobile phones; furthermore, when the system's PC server, the camera and many mobile phones are connected together, photos can be transferred concurrently. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wifi%20Camera" title="Wifi Camera">Wifi Camera</a>, <a href="https://publications.waset.org/abstracts/search?q=socket%20mobile" title=" socket mobile"> socket mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=platform%20video%20monitoring" title=" platform video monitoring"> platform video monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20control" title=" remote control"> remote control</a> </p> <a href="https://publications.waset.org/abstracts/31912/video-sharing-system-based-on-wi-fi-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span 
class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=33">33</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=34">34</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20conferencing%20course&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div 
id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>