
Search results for: video laryngoscopy

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="video laryngoscopy"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 984</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: video laryngoscopy</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">984</span> Proof of Concept of Video Laryngoscopy Intubation: Potential Utility in the Pre-Hospital Environment by Emergency Medical Technicians</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Al%20Hajeri">A. Al Hajeri</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20E.%20Minton"> M. E. Minton</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Haskins"> B. Haskins</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20H.%20Cummins"> F. H. Cummins</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The pre-hospital endotracheal intubation is fraught with difficulties; one solution offered has been video laryngoscopy (VL) which permits better visualization of the glottis than the standard method of direct laryngoscopy (DL). This method has resulted in a higher first attempt success rate and fewer failed intubations. However, VL has mainly been evaluated by experienced providers (experienced anesthetists), and as such the utility of this device for those whom infrequently intubate has not been thoroughly assessed. We sought to evaluate this equipment to determine whether in the hands of novice providers this equipment could prove an effective airway management adjunct. DL and two VL methods (C-Mac with distal screen/C-Mac with attached screen) were evaluated by simulating practice on a Laerdal airway management trainer manikin. Twenty Emergency Medical Technicians (basics) were recruited as novice practitioners. This group was used to eliminate bias, as these clinicians had no pre-hospital experience of intubation (although they did have basic airway skills). 
The following areas were assessed: time taken to intubate, number of attempts required to successfully intubate, and ease of use of the equipment. VL (attached screen) took novice clinicians longer on average to intubate successfully, had a lower success rate, and was rated more difficult to use than DL. However, VL (with distal screen) and DL were comparable on intubation times, success rate, gastric inflation rate, and user-rated difficulty. This study suggests that routine use of VL by inexperienced clinicians would be of no added benefit over DL. Further studies are required to determine whether Emergency Medical Technicians (Paramedics) would benefit from this airway adjunct, and to ascertain whether, after initial mastery of VL (with a distal screen), lower intubation times and difficulty ratings may be achievable.
Keywords: direct laryngoscopy, endotracheal intubation, pre-hospital, video laryngoscopy
PDF: https://publications.waset.org/abstracts/24688.pdf (Downloads: 410)

983. A Comparison between the McGrath Video Laryngoscope and the Macintosh Laryngoscopy in Children with Expected Normal Airway
Authors: Jong Yeop Kim, Ji Eun Kim, Hyun Jeong Kwak, Sook Young Lee
Abstract: Background: This prospective, randomized, controlled study evaluated the usefulness of the McGrath video laryngoscope (VL) compared to Macintosh laryngoscopy for endotracheal intubation in children with expected normal airways, by comparing time to intubation and ease of intubation. Methods: Eighty-four patients aged 1-10 years undergoing endotracheal intubation for elective surgery were randomly assigned to the McGrath group (n = 42) or the Macintosh group (n = 42). Anesthesia was induced with propofol 2.5-3.0 mg/kg and sevoflurane 5-8 vol%. Orotracheal intubation was performed 2 minutes after injection of rocuronium 0.6 mg/kg, using the McGrath VL or the Macintosh laryngoscope. The primary outcome was time to intubation. The Cormack and Lehane glottic grade, intubation difficulty score (IDS), and success rate of intubation were assessed.
Hemodynamic changes were also recorded. Results: Median time to intubation [interquartile range] was not different between the McGrath group and the Macintosh group (25.0 [22.8-28.3] s vs. 26.0 [24.0-29.0] s, p = 0.301). The incidence of a grade I glottic view was significantly higher in the McGrath group than in the Macintosh group (95% vs. 74%, p = 0.013). Median IDS was lower in the McGrath group than in the Macintosh group (0 [0-0] vs. 0 [0-1], p = 0.018). There were no significant differences in intubation success rate or hemodynamics between the two groups. Conclusions: The McGrath VL provides better laryngeal views and lower IDS, but similar intubation times and success rates, compared to the Macintosh laryngoscope in children with normal airways.
Keywords: intubation, Macintosh laryngoscopy, McGrath videolaryngoscopy, pediatrics
PDF: https://publications.waset.org/abstracts/75537.pdf (Downloads: 228)

982. H.263 Based Video Transceiver for Wireless Camera System
Authors: Won-Ho Kim
Abstract: In this paper, a design for an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. The standard H.263 video encoding technique is used for video compression, since a wireless video transmitter cannot transmit high-capacity raw data in real time; the implemented system streams NTSC 720x480 video at less than 1 Mbps.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">365</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">981</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for extraction of text subtitles in large video is proposed. The video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the motive of providing useful information about that video. So need arises to detect text present in video to understanding and video indexing. This is achieved in two steps. First step is text localization and the second step is text verification. The method of text detection can be extended to text recognition which finds applications in automatic video indexing; video annotation and content based video retrieval. The method has been tested on various types of videos. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">980</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid Benkaddour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, huge amount of multimedia repositories make the browsing, retrieval and delivery of video contents very slow and even difficult tasks. Video summarization has been proposed to improve faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. The video summaries can be generated in many different forms. However, two fundamentals ways to generate summaries are static and dynamic. We present different techniques for each mode in the literature and describe some features used for generating video summaries. We conclude with perspective for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">979</span> Lecture Video Indexing and Retrieval Using Topic Keywords</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Sandesh">B. J. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework to help users to search and retrieve the portions in the lecture video of their interest. This is achieved by temporally segmenting and indexing the lecture video using the topic keywords. We use transcribed text from the video and documents relevant to the video topic extracted from the web for this purpose. The keywords for indexing are found by applying the non-negative matrix factorization (NMF) topic modeling techniques on the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end time of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword which is used to retrieve the specific portions of the video for the query provided by the users. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">978</span> Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics from this huge database is a challenging task. Therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, Hadoop Framework for distributed computing for Big Video Data is used. First, step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for the video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology on key-frames. The OCR and detected slide text line types are adopted for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on Hadoop framework. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">977</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals will lead to unanticipated effects. Such as image distortion, image blurring etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos .A stable output video will be attained without the effect of jitter which is caused due to shaking of handheld camera during video recording. Firstly, salient points from each frame from the input video are identified and processed followed by optimizing and stabilize the video. Optimization includes the quality of the video stabilization. This method has shown good result in terms of stabilization and it discarded distortion from the output videos recorded in different circumstances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">976</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For the 58 years since the first video game came into being, the video game industry is getting through an explosive evolution from then on. Video games exert great influence on society and become a reflection of public life to some extent. Video game virtual spaces are where activities are taking place like real spaces. And that’s the reason why some architects pay attention to video games. However, compared to the researches on the appearance of games, we observe a lack of theoretical comprehensive on the construction of video game virtual spaces. The research method of this paper is to collect literature and conduct theoretical research about the virtual space in video games firstly. And then analogizing the opinions on the space phenomena from the theory of literature and films. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space-narrative space players space”, which correspond to the exterior, expressive, affective parts of the game space. Also, we illustrate each sub-space according to numerous instances of published video games. Hoping this writing could promote the interactive development of video games and architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">975</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a rapid growth of digital videos and data communications, video summarization that provides a shorter version of the video for fast video browsing and retrieval is necessary. Key frame extraction is one of the mechanisms to generate video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most different frames and/or not cover the entire content of a video. In this paper, we propose a method of video summarization which provides the reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual informaion as our objective functions for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach that produces video summary with better coverage of the entire video content while less redundancy among key frames comparing to the state-of-the-art approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">974</span> Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H. 265</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sarumathi">S. Sarumathi</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Deepadharani"> C. Deepadharani</a>, <a href="https://publications.waset.org/abstracts/search?q=Garimella%20Archana"> Garimella Archana</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Dakshayani"> S. Dakshayani</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Logeshwaran"> D. Logeshwaran</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jayakumar"> D. Jayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayarangan%20Natarajan"> Vijayarangan Natarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need of the hour for the customers who use a dial-up or a low broadband connection for their internet services is to access HD video data. This can be achieved by developing a new video format using H. 265. This is the latest video codec standard developed by ISO/IEC Moving Picture Experts Group (MPEG) and ITU-T Video Coding Experts Group (VCEG) on April 2013. This new standard for video compression has the potential to deliver higher performance than the earlier standards such as H. 264/AVC. In comparison with H. 264, HEVC offers a clearer, higher quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high definition videos using low bandwidth. It doubles the data compression ratio supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports this H. 265 standard. The major areas of applications in the coming future would lead to enhancements in the performance level of digital television like Tata Sky and Sun Direct, BluRay Discs, Mobile Video, Video Conferencing and Internet and Live Video streaming. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20HD%20video" title="access HD video">access HD video</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20265%20video%20standard" title=" H. 265 video standard"> H. 
973. Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract: Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and the textures near them. When the video frames change noticeably, their dominant blocks change as well, and a key frame can be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of those matrices over sliding windows. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition
PDF: https://publications.waset.org/abstracts/18296.pdf (Downloads: 398)
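A small sketch of the rank-tracing idea described above: stack per-frame feature vectors into a matrix, compute the effective rank of a sliding window via SVD, and flag rank jumps. The feature extraction itself (the paper's FSDWT dominant blocks) is abstracted away; the tolerance and window size are assumptions.

```python
# Track the effective rank of a sliding window of per-frame feature
# vectors; a rank jump suggests a shot change / key-frame candidate.
import numpy as np

def sliding_window_ranks(features, window=10, tol=1e-2):
    X = np.asarray(features, dtype=float)  # shape: (n_frames, dim)
    ranks = []
    for t in range(len(X) - window + 1):
        s = np.linalg.svd(X[t:t + window], compute_uv=False)
        ranks.append(int((s / (s[0] + 1e-12) > tol).sum()))  # effective rank
    return ranks

def keyframe_candidates(ranks):
    # windows where the rank increases mark likely content changes
    return [t + 1 for t in range(len(ranks) - 1) if ranks[t + 1] > ranks[t]]
```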
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">972</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult to evaluate musical instrument playing in a video by computer system. Any television or film video clip with music information are rich sources for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural network (CNN) and pass network learned features through recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNN for music and video feature extraction and then fine tune each model. The music network use 2D convolutional network and video network use 3D convolution (C3D). Finally, we concatenate each music and video feature by preserving the time varying features. The long short term memory (LSTM) network is used for long-term dynamic feature characterization and then use late fusion with generalized mean. The proposed network performs better performance to recognize the musical instrument using audio-video multimodal neural network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">971</span> Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Jasim%20Habeeb">Nada Jasim Habeeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Rana%20Saad%20Mohammed"> Rana Saad Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muntaha%20Khudair%20Abbass"> Muntaha Khudair Abbass </a> </p> <p class="card-text"><strong>Abstract:</strong></p> For more efficient and fast video summarization, this paper presents a surveillance video summarization method. The presented method works to improve video summarization technique. This method depends on temporal differencing to extract most important data from large video stream. This method uses histogram differencing and Sum Conditional Variance which is robust against to illumination variations in order to extract motion objects. The experimental results showed that the presented method gives better output compared with temporal differencing based summarization techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20differencing" title="temporal differencing">temporal differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title=" video summarization"> video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20differencing" title=" histogram differencing"> histogram differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20conditional%20variance" title=" sum conditional variance"> sum conditional variance</a> </p> <a href="https://publications.waset.org/abstracts/54404/surveillance-video-summarization-based-on-histogram-differencing-and-sum-conditional-variance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">970</span> Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Azimi">Maryam Azimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Banitalebi-Dehkordi"> Amin Banitalebi-Dehkordi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Dong"> Yuanyuan Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20T.%20Pourazad"> Mahsa T. Pourazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Panos%20Nasiopoulos"> Panos Nasiopoulos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for the High Dynamic Range (HDR) content. With the introduction of HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, which consists of representative indoor and outdoor video sequences with different brightness, motion levels and different representing types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirm that VIF quality metric outperforms all to their tested metrics in the presence of the tested types of distortions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HDR" title="HDR">HDR</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range" title=" dynamic range"> dynamic range</a>, <a href="https://publications.waset.org/abstracts/search?q=LDR" title=" LDR"> LDR</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title=" video compression"> video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20metrics" title=" video quality metrics"> video quality metrics</a> </p> <a href="https://publications.waset.org/abstracts/18171/evaluating-the-performance-of-existing-full-reference-quality-metrics-on-high-dynamic-range-hdr-video-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">525</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">969</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">968</span> H.264 Video Privacy Protection Method Using Regions of Interest Encryption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taekyun%20Doo">Taekyun Doo</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheongmin%20Ji"> Cheongmin Ji</a>, <a href="https://publications.waset.org/abstracts/search?q=Manpyo%20Hong"> Manpyo Hong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Like a closed-circuit television (CCTV), video surveillance system is widely placed for gathering video from unspecified people to prevent crime, surveillance, or many other purposes. However, abuse of CCTV brings about concerns of personal privacy invasions. In this paper, we propose an encryption method to protect personal privacy system in H.264 compressed video bitstream with encrypting only regions of interest (ROI). There is no need to change the existing video surveillance system. In addition, encrypting ROI in compressed video bitstream is a challenging work due to spatial and temporal drift errors. For this reason, we propose a novel drift mitigation method when ROI is encrypted. The proposed method was implemented by using JM reference software based on the H.264 compressed videos, and experimental results show the verification of our proposed methods and its effectiveness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.264%2FAVC" title="H.264/AVC">H.264/AVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20encryption" title=" video encryption"> video encryption</a>, <a href="https://publications.waset.org/abstracts/search?q=privacy%20protection" title=" privacy protection"> privacy protection</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20compression" title=" post compression"> post compression</a>, <a href="https://publications.waset.org/abstracts/search?q=region%20of%20interest" title=" region of interest"> region of interest</a> </p> <a href="https://publications.waset.org/abstracts/57651/h264-video-privacy-protection-method-using-regions-of-interest-encryption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57651.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">967</span> The Impact of Keyword and Full Video Captioning on Listening Comprehension</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elias%20Bensalem">Elias Bensalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the effect of two types of captioning (full and keyword captioning) on listening comprehension. Thirty-six university-level EFL students participated in the study. They were randomly assigned to watch three video clips under three conditions. The first group watched the video clips with full captions. The second group watched the same video clips with keyword captions. The control group watched the video clips without captions. After watching each clip, participants took a listening comprehension test. At the end of the experiment, participants completed a questionnaire to measure their perceptions about the use of captions and the video clips they watched. Results indicated that the full captioning group significantly outperformed both the keyword captioning and the no captioning group on the listening comprehension tests. However, this study did not find any significant difference between the keyword captioning group and the no captioning group. Results of the survey suggest that keyword captioning were a source of distraction for participants. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=captions" title="captions">captions</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL" title=" EFL"> EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=listening%20comprehension" title=" listening comprehension"> listening comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a> </p> <a href="https://publications.waset.org/abstracts/62467/the-impact-of-keyword-and-full-video-captioning-on-listening-comprehension" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">966</span> Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salam%20Khalifa">Salam Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a new method to reconstruct a temporally coherent 3D animation from single or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data, in form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independent of their resolution. Our new motion vectors based dynamic alignment method then fully reconstruct a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that despite the limiting factors of temporal and spatial noise associated to RGB-D data, it is possible to extract temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20video" title="3D video">3D video</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20animation" title=" 3D animation"> 3D animation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20video" title=" RGB-D video"> RGB-D video</a>, <a href="https://publications.waset.org/abstracts/search?q=temporally%20coherent%203D%20animation" title=" temporally coherent 3D animation"> temporally coherent 3D animation</a> </p> <a href="https://publications.waset.org/abstracts/12034/temporally-coherent-3d-animation-reconstruction-from-rgb-d-video-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">965</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics as Video is used as a silent evidence in court such as in child pornography, movie piracy cases, insurance claims, cases involving scientific fraud, traffic monitoring etc. The biggest threat to video data is the availability of modern open video editing tools which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for inter-frame video tampering detection, its type and location by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated feature. The performance of the algorithm is compared with the z-score thresholding and achieved an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) background. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">964</span> Video Sharing System Based On Wi-fi Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qidi%20Lin">Qidi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinbin%20Huang"> Jinbin Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weile%20Liang"> Weile Liang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a video sharing platform based on WiFi, which consists of camera, mobile phone and PC server. This platform can receive wireless signal from the camera and show the live video on the mobile phone captured by camera. In addition that, it is able to send commands to camera and control the camera’s holder to rotate. The platform can be applied to interactive teaching and dangerous area’s monitoring and so on. Testing results show that the platform can share the live video of mobile phone. Furthermore, if the system’s PC sever and the camera and many mobile phones are connected together, it can transfer photos concurrently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wifi%20Camera" title="Wifi Camera">Wifi Camera</a>, <a href="https://publications.waset.org/abstracts/search?q=socket%20mobile" title=" socket mobile"> socket mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=platform%20video%20monitoring" title=" platform video monitoring"> platform video monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20control" title=" remote control"> remote control</a> </p> <a href="https://publications.waset.org/abstracts/31912/video-sharing-system-based-on-wi-fi-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">963</span> Internet Optimization by Negotiating Traffic Times </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Gonzalez">Carlos Gonzalez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes a system to optimize the use of the internet by clients requiring downloading of videos at peak hours. The system consists of a web server belonging to a provider of video contents, a provider of internet communications and a software application running on a client&rsquo;s computer. The client using the application software will communicate to the video provider a list of the client&rsquo;s future video demands. The video provider calculates which videos are going to be more in demand for download in the immediate future, and proceeds to request the internet provider the most optimal hours to do the downloading. The times of the downloading will be sent to the application software, which will use the information of pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos will be saved in a special protected section of the user&rsquo;s hard disk, which will only be accessed by the application software in the client&rsquo;s computer. When the client is ready to see a video, the application will search the list of current existent videos in the area of the hard disk; if it does exist, it will use this video directly without the need for internet access. We found that the best way to optimize the download traffic of videos is by negotiation between the internet communication provider and the video content provider. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=internet%20optimization" title="internet optimization">internet optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20download" title=" video download"> video download</a>, <a href="https://publications.waset.org/abstracts/search?q=future%20demands" title=" future demands"> future demands</a>, <a href="https://publications.waset.org/abstracts/search?q=secure%20storage" title=" secure storage"> secure storage</a> </p> <a href="https://publications.waset.org/abstracts/107006/internet-optimization-by-negotiating-traffic-times" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">962</span> Grounded Theory of Consumer Loyalty: A Perspective through Video Game Addiction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Shaikh">Bassam Shaikh</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20A.%20Jumain"> R. S. A. Jumain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Game addiction has become an extremely important topic in psychology researchers, particularly in understanding and explaining why individuals become addicted (to video games). In previous studies, effect of online game addiction on social responsibilities, health problems, government action, and the behaviors of individuals to purchase and the causes of making individuals addicted on the video games has been discussed. Extending these concepts in marketing, it could be argued than the phenomenon could enlighten and extending our understanding on consumer loyalty. This study took the Grounded Theory approach, and found that motivation, satisfaction, fulfillments, exploration and achievements to be part of the important elements that builds consumer loyalty. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grounded%20theory" title="grounded theory">grounded theory</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20loyalty" title=" consumer loyalty"> consumer loyalty</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20game%20addiction" title=" video game addiction"> video game addiction</a> </p> <a href="https://publications.waset.org/abstracts/9724/grounded-theory-of-consumer-loyalty-a-perspective-through-video-game-addiction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9724.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">961</span> Tackling the Digital Divide: Enhancing Video Consultation Access for Digital Illiterate Patients in the Hospital</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wieke%20Ellen%20Bouwes">Wieke Ellen Bouwes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to unravel which factors enhance accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews with patients, hospital employees, eHealth experts, and digital support organizations were held. Patients with low digital literacy received in-home support during real-time video consultations and are observed during the set-up of these consultations. Key findings highlight the importance of patient acceptance, emphasizing video consultations benefits and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Familiarity with support organizations – to support patients in usage of digital tools - among healthcare practitioners enhances accessibility. Moreover, considerations regarding the Dutch General Data Protection Regulation (GDPR) law influence support patients receive. Also, provider readiness to use video consultations influences patient access. Further, alignment between learning styles and support methods seems to determine abilities to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore effectiveness of learning methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20consultations" title="video consultations">video consultations</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20literacy%20skills" title=" digital literacy skills"> digital literacy skills</a>, <a href="https://publications.waset.org/abstracts/search?q=effectiveness%20of%20support" title=" effectiveness of support"> effectiveness of support</a>, <a href="https://publications.waset.org/abstracts/search?q=intra-%20and%20inter-organizational%20relationships" title=" intra- and inter-organizational relationships"> intra- and inter-organizational relationships</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20acceptance%20of%20video%20consultations" title=" patient acceptance of video consultations"> patient acceptance of video consultations</a> </p> <a href="https://publications.waset.org/abstracts/173756/tackling-the-digital-divide-enhancing-video-consultation-access-for-digital-illiterate-patients-in-the-hospital" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173756.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">960</span> The Effect of Video Games on English as a Foreign Language Students&#039; Language Learning Motivation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamim%20Ali">Shamim Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Researchers and teachers have begun developing digital games and model environments for educational purpose; therefore this study examines the effect of a videos game on secondary school students’ language learning motivation. Secondly, it tries to find out the opportunities to develop a decision making process and simultaneously it analyzes the solutions for further implementation in educational setting. Participants were 30 male students randomly assigned to one of the following three treatments: 10 students were assigned to read the game’s story; 10 students were players, who played video game; and, and the last 10 students acted as watchers and observers, their duty was to watch their classmates play the digital video game. A language learning motivation scale was developed and it was given to the participants as a pre- and post-test. Results indicated a significant language learning motivation and the participants were quite motivated in the end. It is, thus, concluded that the use of video games can help enhance high school students’ language learning motivation. It was suggested that video games should be used as a complementary activity not as a replacement for textbook since excessive use of video games can divert the original purpose of learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=EFL" title="EFL">EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20as%20a%20Foreign%20Language" title=" English as a Foreign Language"> English as a Foreign Language</a>, <a href="https://publications.waset.org/abstracts/search?q=motivation" title=" motivation"> motivation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20games" title=" video games"> video games</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL%20learners" title=" EFL learners"> EFL learners</a> </p> <a href="https://publications.waset.org/abstracts/100055/the-effect-of-video-games-on-english-as-a-foreign-language-students-language-learning-motivation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100055.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">179</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">959</span> Content Based Video Retrieval System Using Principal Object Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Van%20Thinh%20Bui">Van Thinh Bui</a>, <a href="https://publications.waset.org/abstracts/search?q=Anh%20Tuan%20Tran"> Anh Tuan Tran</a>, <a href="https://publications.waset.org/abstracts/search?q=Quoc%20Viet%20Ngo"> Quoc Viet Ngo</a>, <a href="https://publications.waset.org/abstracts/search?q=The%20Bao%20Pham"> The Bao Pham</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video retrieval is a searching problem on videos or clips based on content in which they are relatively close to an input image or video. The application of this retrieval consists of selecting video in a folder or recognizing a human in security camera. However, some recent approaches have been in challenging problem due to the diversity of video types, frame transitions and camera positions. Besides, that an appropriate measures is selected for the problem is a question. In order to overcome all obstacles, we propose a content-based video retrieval system in some main steps resulting in a good performance. From a main video, we process extracting keyframes and principal objects using Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded Up Robust Features (SURF) are selected from those principal objects. Then, the model “Bag-of-words” in accompanied by SVM classification are applied to obtain the retrieval result. Our system is performed on over 300 videos in diversity from music, history, movie, sports, and natural scene to TV program show. The performance is evaluated in promising comparison to the other approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title="video retrieval">video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20objects" title=" principal objects"> principal objects</a>, <a href="https://publications.waset.org/abstracts/search?q=keyframe" title=" keyframe"> keyframe</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation%20of%20aggregating%20superpixels" title=" segmentation of aggregating superpixels"> segmentation of aggregating superpixels</a>, <a href="https://publications.waset.org/abstracts/search?q=speeded%20up%20robust%20features" title=" speeded up robust features"> speeded up robust features</a>, <a href="https://publications.waset.org/abstracts/search?q=bag-of-words" title=" bag-of-words"> bag-of-words</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/59753/content-based-video-retrieval-system-using-principal-object-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59753.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">302</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">958</span> Remote Video Supervision via DVB-H Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanen%20Ghabi">Hanen Ghabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youssef%20Oudhini"> Youssef Oudhini</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassen%20Mnif"> Hassen Mnif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> By reference to recent publications dealing with the same problem, and as a follow-up to this research work already published, we propose in this article a new original idea of tele supervision exploiting the opportunities offered by the DVB-H system. The objective is to exploit the RF channels of the DVB-H network in order to insert digital remote monitoring images dedicated to a remote solar power plant. Indeed, the DVB-H (Digital Video Broadcast-Handheld) broadcasting system was designed and deployed for digital broadcasting on the same platform as the parent system, DVB-T. We claim to be able to exploit this approach in order to satisfy the operator of remote photovoltaic sites (and others) in order to remotely control the components of isolated installations by means of video surveillance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title="video surveillance">video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20video%20broadcast-handheld" title=" digital video broadcast-handheld"> digital video broadcast-handheld</a>, <a href="https://publications.waset.org/abstracts/search?q=photovoltaic%20sites" title=" photovoltaic sites"> photovoltaic sites</a>, <a href="https://publications.waset.org/abstracts/search?q=AVC" title=" AVC"> AVC</a> </p> <a href="https://publications.waset.org/abstracts/147516/remote-video-supervision-via-dvb-h-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">957</span> Video-Based Psychoeducation for Caregivers of Persons with Schizophrenia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jilu%20David">Jilu David</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Schizophrenia is one of the most misunderstood mental illnesses across the globe. Lack of understanding about mental illnesses often delay treatment, severely affects the functionality of the person, and causes distress to the family. The study, Video-based Psychoeducation for Caregivers of Persons with Schizophrenia, consisted of developing a psychoeducational video about Schizophrenia, its symptoms, causes, treatment, and the importance of family support. Methodology: A quasi-experimental pre-post design was used to understand the feasibility of the study. Qualitative analysis strengthened the feasibility outcomes. Knowledge About Schizophrenia Interview was used to assess the level of knowledge of 10 participants, before and after the screening of the video. Results: Themes of usefulness, length, content, educational component, format of the intervention, and language emerged in the qualitative analysis. There was a statistically significant difference in the knowledge level of participants before and after the video screening. Conclusion: The statistical and qualitative analysis revealed that the video-based psychoeducation program was feasible and that it facilitated a general improvement in knowledge of the participants. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Schizophrenia" title="Schizophrenia">Schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=mental%20illness" title=" mental illness"> mental illness</a>, <a href="https://publications.waset.org/abstracts/search?q=psychoeducation" title=" psychoeducation"> psychoeducation</a>, <a href="https://publications.waset.org/abstracts/search?q=video-based%20psychoeducation" title=" video-based psychoeducation"> video-based psychoeducation</a>, <a href="https://publications.waset.org/abstracts/search?q=family%20support" title=" family support"> family support</a> </p> <a href="https://publications.waset.org/abstracts/122698/video-based-psychoeducation-for-caregivers-of-persons-with-schizophrenia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122698.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">956</span> The Video Database for Teaching and Learning in Football Refereeing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Armenteros">M. Armenteros</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Dom%C3%ADnguez"> A. Domínguez</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Fern%C3%A1ndez"> M. Fernández</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20J.%20Ben%C3%ADtez"> A. J. Benítez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The following paper describes the video database tool used by the F&eacute;d&eacute;ration Internationale de Football Association (FIFA) as part of the research project developed in collaboration with the Carlos III University of Madrid. The database project began in 2012, with the aim of creating an educational tool for the training of instructors, referees and assistant referees, and it has been used in all FUTURO III courses since 2013. The platform now contains 3,135 video clips of different match situations from FIFA competitions. It has 1,835 users (FIFA instructors, referees and assistant referees). In this work, the main features of the database are described, such as the use of a search tool and the creation of multimedia presentations and video quizzes. The database has been developed in MySQL, ActionScript, Ruby on Rails and HTML. This tool has been rated by users as &quot;very good&quot; in all courses, which prompt us to introduce it as an ideal tool for any other sport that requires the use of video analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=assistants%20referees" title="assistants referees">assistants referees</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title=" cloud computing"> cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=e-learning" title=" e-learning"> e-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=instructors" title=" instructors"> instructors</a>, <a href="https://publications.waset.org/abstracts/search?q=FIFA" title=" FIFA"> FIFA</a>, <a href="https://publications.waset.org/abstracts/search?q=referees" title=" referees"> referees</a>, <a href="https://publications.waset.org/abstracts/search?q=soccer" title=" soccer"> soccer</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20database" title=" video database"> video database</a> </p> <a href="https://publications.waset.org/abstracts/49511/the-video-database-for-teaching-and-learning-in-football-refereeing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49511.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">955</span> The Effects of Watching Text-Relevant Video Segments with/without Subtitles on Vocabulary Development of Arabic as a Foreign Language Learners</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amirreza%20Karami">Amirreza Karami</a>, <a href="https://publications.waset.org/abstracts/search?q=Hawraa%20Nafea%20Hameed%20Alzouwain"> Hawraa Nafea Hameed Alzouwain</a>, <a href="https://publications.waset.org/abstracts/search?q=Freddie%20A.%20Bowles"> Freddie A. Bowles</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the effects of watching text-relevant video segments with/without subtitles on vocabulary development of Arabic as a Foreign Language (AFL) learners. The participants of the study were assigned to two groups: one control group and one experimental group. The control group received no video-based instruction while the experimental group watched a text-relevant video segment in three stages: pre, while, and post-instruction. The preliminary results of the pre-test and post-test show that watching text-relevant video segments through following a pre-while-post procedure can help the vocabulary development of AFL learners more than non-video-based instruction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text-relevant%20video%20segments" title="text-relevant video segments">text-relevant video segments</a>, <a href="https://publications.waset.org/abstracts/search?q=vocabulary%20development" title=" vocabulary development"> vocabulary development</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20as%20a%20Foreign%20Language" title=" Arabic as a Foreign Language"> Arabic as a Foreign Language</a>, <a href="https://publications.waset.org/abstracts/search?q=AFL" title=" AFL"> AFL</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-while-post%20instruction" title=" pre-while-post instruction"> pre-while-post instruction</a> </p> <a href="https://publications.waset.org/abstracts/126505/the-effects-of-watching-text-relevant-video-segments-withwithout-subtitles-on-vocabulary-development-of-arabic-as-a-foreign-language-learners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126505.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=32">32</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=33">33</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20laryngoscopy&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
}).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10