<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: big video data analysis</title> <meta name="description" content="Search results for: big video data analysis"> <meta name="keywords" content="big video data analysis"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="big video data analysis" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="big video data analysis"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 42476</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: big video data analysis</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42476</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since the wireless video transmitter is unable to transmit high-capacity raw data in real time. The implemented system is capable of streaming at a speed of less than 1 Mbps using NTSC 720x480 video. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42475</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications. Text is incorporated in digital video to provide useful information about that video, so the need arises to detect the text present in a video for video understanding and indexing. This is achieved in two steps: the first step is text localization, and the second step is text verification. 
The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42474</span> Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. 
Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task. Therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology on key-frames. The OCR and detected slide text line types are adopted for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">533</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42473</span> Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Salam%20Khalifa">Salam Khalifa</a>, <a href="https://publications.waset.org/abstracts/search?q=Naveed%20Ahmed"> Naveed Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a new method to reconstruct a temporally coherent 3D animation from single or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data, in form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independent of their resolution. 
Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20video" title="3D video">3D video</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20animation" title=" 3D animation"> 3D animation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20video" title=" RGB-D video"> RGB-D video</a>, <a href="https://publications.waset.org/abstracts/search?q=temporally%20coherent%203D%20animation" title=" temporally coherent 3D animation"> temporally coherent 3D animation</a> </p> <a href="https://publications.waset.org/abstracts/12034/temporally-coherent-3d-animation-reconstruction-from-rgb-d-video-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42472</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid 
Benkaddour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, the huge amount of multimedia repositories makes the browsing, retrieval, and delivery of video content very slow and even difficult. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental ways to generate summaries are static and dynamic. We present different techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42471</span> Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H. 
265</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sarumathi">S. Sarumathi</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Deepadharani"> C. Deepadharani</a>, <a href="https://publications.waset.org/abstracts/search?q=Garimella%20Archana"> Garimella Archana</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Dakshayani"> S. Dakshayani</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Logeshwaran"> D. Logeshwaran</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jayakumar"> D. Jayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayarangan%20Natarajan"> Vijayarangan Natarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need of the hour for customers who use a dial-up or low-broadband connection for their internet service is to access HD video data. This can be achieved by developing a new video format using H.265, the latest video codec standard, developed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) in April 2013. This new standard for video compression has the potential to deliver higher performance than earlier standards such as H.264/AVC. In comparison with H.264, HEVC offers a clearer, higher-quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high-definition video using low bandwidth. It doubles the data compression ratio, supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports the H.265 standard. The major areas of application in the coming future would be enhancements in the performance of digital television services such as Tata Sky and Sun Direct, Blu-ray Discs, mobile video, video conferencing, and internet and live video streaming. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20HD%20video" title="access HD video">access HD video</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20265%20video%20standard" title=" H. 265 video standard"> H. 265 video standard</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance" title=" high performance"> high performance</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20quality%20image" title=" high quality image"> high quality image</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20bandwidth" title=" low bandwidth"> low bandwidth</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20video%20format" title=" new video format"> new video format</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20streaming%20applications" title=" video streaming applications"> video streaming applications</a> </p> <a href="https://publications.waset.org/abstracts/1881/efficient-storage-and-intelligent-retrieval-of-multimedia-streams-using-h-265" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42470</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For the 58 years since the first video game came 
into being, the video game industry has gone through an explosive evolution. Video games exert great influence on society and have become a reflection of public life to some extent. Video game virtual spaces are where activities take place, much as in real spaces, and that is why some architects pay attention to video games. However, compared to the research on the appearance of games, we observe a lack of comprehensive theory on the construction of video game virtual spaces. The research method of this paper is first to collect literature and conduct theoretical research on the virtual space in video games, and then to draw analogies with the views on spatial phenomena from the theory of literature and film. Finally, this paper proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space - narrative space - player space”, which corresponds to the exterior, expressive, and affective parts of the game space. We also illustrate each sub-space with numerous instances of published video games. We hope this writing can promote the interactive development of video games and architecture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42469</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. 
However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of a video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm treats finding key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> 
<h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42468</span> Extending Image Captioning to Video Captioning Using Encoder-Decoder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sikiru%20Ademola%20Adewale">Sikiru Ademola Adewale</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Thomas"> Joe Thomas</a>, <a href="https://publications.waset.org/abstracts/search?q=Bolanle%20Hafiz%20Matti"> Bolanle Hafiz Matti</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosin%20Ige"> Tosin Ige</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decoder" title="decoder">decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=many-to-many%20mapping" title=" many-to-many mapping"> many-to-many mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20captioning" title=" video captioning"> video captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=2-gram%20BLEU" title=" 2-gram BLEU"> 2-gram BLEU</a> </p> <a href="https://publications.waset.org/abstracts/164540/extending-image-captioning-to-video-captioning-using-encoder-decoder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164540.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42467</span> Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Azimi">Maryam Azimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Banitalebi-Dehkordi"> Amin Banitalebi-Dehkordi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Dong"> Yuanyuan Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20T.%20Pourazad"> Mahsa T. 
Pourazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Panos%20Nasiopoulos"> Panos Nasiopoulos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for High Dynamic Range (HDR) content. With the introduction of the HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, which consists of representative indoor and outdoor video sequences with different brightness and motion levels and different representative types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all other tested metrics in the presence of the tested types of distortions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HDR" title="HDR">HDR</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range" title=" dynamic range"> dynamic range</a>, <a href="https://publications.waset.org/abstracts/search?q=LDR" title=" LDR"> LDR</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title=" video compression"> video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20metrics" title=" video quality metrics"> video quality metrics</a> </p> <a href="https://publications.waset.org/abstracts/18171/evaluating-the-performance-of-existing-full-reference-quality-metrics-on-high-dynamic-range-hdr-video-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">524</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42466</span> Lecture Video Indexing and Retrieval Using Topic Keywords</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20J.%20Sandesh">B. J. Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. 
Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework to help users search for and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. We use transcribed text from the video and documents relevant to the video topic extracted from the web for this purpose. The keywords for indexing are found by applying non-negative matrix factorization (NMF) topic modeling to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end times of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for a user's query. 
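The NMF keyword-extraction step above can be sketched in a few lines: factor a document-term count matrix V ≈ W·H with multiplicative updates, then read each topic's keywords from the largest entries of the rows of H. The tiny vocabulary and counts below are illustrative assumptions, not the paper's web corpus:

```python
# Multiplicative-update NMF (Lee-Seung rules) on a toy document-term matrix.
import numpy as np

def nmf(V, k, iters=300, eps=1e-9):
    """Factor nonnegative V (docs x terms) into W (docs x topics) @ H (topics x terms)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

vocab = ["matrix", "factorization", "video", "lecture", "index", "topic"]
V = np.array([[3, 2, 0, 0, 0, 1],   # rows: documents, columns: term counts
              [2, 3, 0, 1, 0, 0],
              [0, 0, 3, 2, 2, 1],
              [0, 0, 2, 3, 1, 2]], dtype=float)

W, H = nmf(V, k=2)
for t, row in enumerate(H):
    keywords = [vocab[i] for i in np.argsort(row)[::-1][:2]]
    print(f"topic {t}:", keywords)
```

The top-weighted terms per topic would then serve as index keys mapped back to time spans of the transcript.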
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42465</span> Viral Advertising: Popularity and Willingness to Share among the Czech Internet Population</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Martin%20Klepek">Martin Klepek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents results of primary quantitative research on viral advertising, with a focus on the popularity of, and willingness to share, viral videos among the Czech Internet population. It starts with a brief theoretical debate on viral advertising, which is used for comparison with the results. To collect data, an online questionnaire survey was given to 384 respondents. Statistics utilized in this research included frequency, percentage, correlation, and Pearson’s Chi-square test. 
The data were evaluated using SPSS software. The analysis disclosed high popularity of viral advertising videos among the Czech Internet population but a lower willingness to share them. A significant relationship between the likability of the viral video technique and the age of the viewer was found. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=internet%20advertising" title="internet advertising">internet advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=internet%20population" title=" internet population"> internet population</a>, <a href="https://publications.waset.org/abstracts/search?q=promotion" title=" promotion"> promotion</a>, <a href="https://publications.waset.org/abstracts/search?q=marketing%20communication" title=" marketing communication"> marketing communication</a>, <a href="https://publications.waset.org/abstracts/search?q=viral%20advertising" title=" viral advertising"> viral advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=viral%20video" title=" viral video"> viral video</a> </p> <a href="https://publications.waset.org/abstracts/8612/viral-advertising-popularity-and-willingness-to-share-among-the-czech-internet-population" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8612.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">474</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42464</span> Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nada%20Jasim%20Habeeb">Nada Jasim Habeeb</a>, <a
href="https://publications.waset.org/abstracts/search?q=Rana%20Saad%20Mohammed"> Rana Saad Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Muntaha%20Khudair%20Abbass"> Muntaha Khudair Abbass </a> </p> <p class="card-text"><strong>Abstract:</strong></p> For more efficient and faster video summarization, this paper presents a surveillance video summarization method. The method depends on temporal differencing to extract the most important data from a large video stream. It uses histogram differencing and the Sum Conditional Variance, which is robust against illumination variations, in order to extract motion objects. The experimental results showed that the presented method gives better output compared with temporal-differencing-based summarization techniques. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=temporal%20differencing" title="temporal differencing">temporal differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title=" video summarization"> video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram%20differencing" title=" histogram differencing"> histogram differencing</a>, <a href="https://publications.waset.org/abstracts/search?q=sum%20conditional%20variance" title=" sum conditional variance"> sum conditional variance</a> </p> <a href="https://publications.waset.org/abstracts/54404/surveillance-video-summarization-based-on-histogram-differencing-and-sum-conditional-variance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">42463</span> Video-Based Psychoeducation for Caregivers of Persons with Schizophrenia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jilu%20David">Jilu David</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Schizophrenia is one of the most misunderstood mental illnesses across the globe. Lack of understanding about mental illnesses often delays treatment, severely affects the functionality of the person, and causes distress to the family. The study, Video-based Psychoeducation for Caregivers of Persons with Schizophrenia, consisted of developing a psychoeducational video about Schizophrenia, its symptoms, causes, treatment, and the importance of family support. Methodology: A quasi-experimental pre-post design was used to assess the feasibility of the study. Qualitative analysis strengthened the feasibility outcomes. The Knowledge About Schizophrenia Interview was used to assess the knowledge level of 10 participants before and after the screening of the video. Results: Themes of usefulness, length, content, educational component, format of the intervention, and language emerged in the qualitative analysis. There was a statistically significant difference in the knowledge level of participants before and after the video screening. Conclusion: The statistical and qualitative analyses revealed that the video-based psychoeducation program was feasible and that it facilitated a general improvement in the participants' knowledge. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Schizophrenia" title="Schizophrenia">Schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=mental%20illness" title=" mental illness"> mental illness</a>, <a href="https://publications.waset.org/abstracts/search?q=psychoeducation" title=" psychoeducation"> psychoeducation</a>, <a href="https://publications.waset.org/abstracts/search?q=video-based%20psychoeducation" title=" video-based psychoeducation"> video-based psychoeducation</a>, <a href="https://publications.waset.org/abstracts/search?q=family%20support" title=" family support"> family support</a> </p> <a href="https://publications.waset.org/abstracts/122698/video-based-psychoeducation-for-caregivers-of-persons-with-schizophrenia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/122698.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">131</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42462</span> Fuzzy Inference-Assisted Saliency-Aware Convolution Neural Networks for Multi-View Summarization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tanveer%20Hussain">Tanveer Hussain</a>, <a href="https://publications.waset.org/abstracts/search?q=Khan%20Muhammad"> Khan Muhammad</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Ullah"> Amin Ullah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mi%20Young%20Lee"> Mi Young Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung%20Wook%20Baik"> Sung Wook Baik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Big Data generated from 
distributed vision sensors installed on a large scale in smart cities creates hurdles for its efficient and beneficial exploration for browsing, retrieval, and indexing. This paper presents a three-fold framework for effective video summarization of such data, providing a compact and representative format of Big Video Data. In the first fold, the framework acquires input video data from the installed cameras and collects clues such as the type and count of objects and the clarity of the view from a pre-defined number of frames of each view. In the second fold, the decision of representative view selection for a particular interval is based on a fuzzy inference system, yielding a precise, human-like decision reinforced by the known clues. In the third fold, the framework forwards the selected view frames to the summary generation mechanism, which is supported by a saliency-aware convolution neural network (CNN) model. The new trend of fuzzy rules for view selection followed by a CNN architecture for saliency computation makes the multi-view video summarization (MVS) framework a suitable candidate for real-world practice in smart cities. 
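The second fold, fuzzy view selection from simple clues, can be sketched with triangular memberships and a single min (AND) rule. The membership shapes, clue ranges, and camera names below are illustrative assumptions, not the paper's exact fuzzy inference system:

```python
# Score each camera view from two clues (object count, view clarity) and
# pick the highest-scoring view to represent the interval.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def view_score(object_count, clarity):
    many = tri(object_count, 2, 10, 20)   # membership in "many objects"
    clear = tri(clarity, 0.3, 1.0, 1.7)   # membership in "clear view"
    # Rule: IF many objects AND clear view THEN representative (min = AND)
    return min(many, clear)

views = {"cam_a": (8, 0.9), "cam_b": (1, 0.95), "cam_c": (12, 0.4)}
best = max(views, key=lambda v: view_score(*views[v]))
print(best)  # cam_a: enough objects AND a clear view
```

A full system would aggregate several such rules and defuzzify, but the min/max skeleton is the same.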
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis" title="big video data analysis">big video data analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title=" fuzzy logic"> fuzzy logic</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-view%20video%20summarization" title=" multi-view video summarization"> multi-view video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=saliency%20detection" title=" saliency detection"> saliency detection</a> </p> <a href="https://publications.waset.org/abstracts/135176/fuzzy-inference-assisted-saliency-aware-convolution-neural-networks-for-multi-view-summarization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135176.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">188</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42461</span> Effects of Video Games and Online Chat on Mathematics Performance in High School: An Approach of Multivariate Data Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lina%20Wu">Lina Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenyi%20Lu"> Wenyi Lu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Li"> Ye Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Taking heavy video game play among boys and intensive online chatting among girls as emblematic of current adolescent culture, this data analysis project verifies the displacement effect of these activities on mathematics performance. 
To evaluate the correlation or regression coefficients between playing video games or chatting online and mathematics performance, compared with other factors, we use multivariate analysis techniques and take gender differences into account. We find that the most important reason for the negative sign of the displacement effect on mathematics performance is students’ poor academic background. The statistical analysis methods in this project could be applied to study Internet users’ academic performance from high school through college. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=correlation%20coefficients" title="correlation coefficients">correlation coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=displacement%20effect" title=" displacement effect"> displacement effect</a>, <a href="https://publications.waset.org/abstracts/search?q=multivariate%20analysis%20technique" title=" multivariate analysis technique"> multivariate analysis technique</a>, <a href="https://publications.waset.org/abstracts/search?q=regression%20coefficients" title=" regression coefficients"> regression coefficients</a> </p> <a href="https://publications.waset.org/abstracts/42703/effects-of-video-games-and-online-chat-on-mathematics-performance-in-high-school-an-approach-of-multivariate-data-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42703.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42460</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals often leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. First, salient points in each frame of the input video are identified and processed, followed by optimization to stabilize the video. Optimization improves the quality of the video stabilization. This method has shown good results in terms of stabilization and removed distortion from output videos recorded under different circumstances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5
class="card-header" style="font-size:.9rem"><span class="badge badge-info">42459</span> The Use of Video in Increasing Speaking Ability of the First Year Students of SMAN 12 Pekanbaru in the Academic Year 2011/2012</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elvira%20Wahyuni">Elvira Wahyuni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study is a classroom action research project. The general objective of this study was to assess students’ speaking ability when English is taught using video and to determine the effectiveness of using video in teaching English to improve students’ speaking ability. The subjects of this study were 34 first-year students of SMAN 12 Pekanbaru who were learning English as a foreign language (EFL). Students were given a pre-test before the treatment and a post-test after the treatment. Quantitative data were collected using a speaking test requiring the students to respond to recorded questions. Qualitative data were collected through observation sheets and field notes. The research findings reveal a significant improvement in the students’ speaking ability through the use of video in speaking class. The qualitative data provided a description of, and additional information about, the students' learning process. The research findings indicate that the use of video in teaching and learning is effective in improving learning outcomes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=English%20teaching" title="English teaching">English teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=fun%20learning" title=" fun learning"> fun learning</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking%20ability" title=" speaking ability"> speaking ability</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a> </p> <a href="https://publications.waset.org/abstracts/72779/the-use-of-video-in-increasing-speaking-ability-of-the-first-year-students-of-sman-12-pekanbaru-in-the-academic-year-20112012" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72779.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42458</span> Human Behavior Modeling in Video Surveillance of Conference Halls </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nour%20Charara">Nour Charara</a>, <a href="https://publications.waset.org/abstracts/search?q=Hussein%20Charara"> Hussein Charara</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Abou%20Khaled"> Omar Abou Khaled</a>, <a href="https://publications.waset.org/abstracts/search?q=Hani%20Abdallah"> Hani Abdallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Elena%20Mugellini"> Elena Mugellini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model normal behaviors in conference halls. 
We exploit the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation captures the spatial information, the object trajectory, and the motion. A key advantage of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activity%20modeling" title="activity modeling">activity modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=clustering" title=" clustering"> clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=PLSA" title=" PLSA"> PLSA</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20representation" title=" video representation"> video representation</a> </p> <a href="https://publications.waset.org/abstracts/70466/human-behavior-modeling-in-video-surveillance-of-conference-halls" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70466.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">394</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42457</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. 
Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court in cases such as child pornography, movie piracy, insurance claims, scientific fraud, and traffic monitoring. The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for detecting inter-frame video tampering, its type, and its location by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated features. The performance of the algorithm is compared with z-score thresholding, achieving an efficiency above 95% on all the tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) backgrounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span
class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42456</span> Design, Development by Functional Analysis in UML and Static Test of a Multimedia Voice and Video Communication Platform on IP for a Use Adapted to the Context of Local Businesses in Lubumbashi</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Blaise%20Fyama">Blaise Fyama</a>, <a href="https://publications.waset.org/abstracts/search?q=Elie%20Museng"> Elie Museng</a>, <a href="https://publications.waset.org/abstracts/search?q=Grace%20Mukoma"> Grace Mukoma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article we present a Java implementation of video telephony using the SIP (Session Initiation Protocol). After a functional analysis of the SIP protocol, we relied on the work of researchers at the University of Parma, Italy, to acquire adequate libraries for the development of our own communication tool. In order to optimize the code and improve the prototype, we used, in an incremental approach, test techniques based on static analysis, evaluating the complexity of the software through metrics and McCabe's cyclomatic complexity number. The objective is to promote the emergence of local start-ups producing IP video in a well-understood local context. We arrived at the creation of a video telephony tool whose code is optimized. 
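The static-test idea above rests on McCabe's cyclomatic complexity, M = E - N + 2P, which for structured code equals 1 plus the number of decision points. A toy counter over a Python AST illustrates the metric (the article's tool is in Java; Python stands in here purely for illustration):

```python
# Approximate cyclomatic complexity as 1 + count of branching constructs.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """1 + number of decision points found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "probably prime"
"""
print(cyclomatic_complexity(src))  # 4: base 1 + if + for + if
```

In an incremental workflow, functions whose score exceeds a chosen threshold (often 10) are flagged for refactoring before the next iteration.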
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=static%20analysis" title="static analysis">static analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=coding%20complexity%20metric%20mccabe" title=" coding complexity metric mccabe"> coding complexity metric mccabe</a>, <a href="https://publications.waset.org/abstracts/search?q=Sip" title=" Sip"> Sip</a>, <a href="https://publications.waset.org/abstracts/search?q=uml" title=" uml"> uml</a> </p> <a href="https://publications.waset.org/abstracts/126181/design-development-by-functional-analysis-in-uml-and-static-test-of-a-multimedia-voice-and-video-communication-platform-on-ip-for-a-use-adapted-to-the-context-of-local-businesses-in-lubumbashi" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/126181.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42455</span> Sentiment Analysis on the East Timor Accession Process to the ASEAN</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marcelino%20Caetano%20Noronha">Marcelino Caetano Noronha</a>, <a href="https://publications.waset.org/abstracts/search?q=Vosco%20Pereira"> Vosco Pereira</a>, <a href="https://publications.waset.org/abstracts/search?q=Jose%20Soares%20Pinto"> Jose Soares Pinto</a>, <a href="https://publications.waset.org/abstracts/search?q=Ferdinando%20Da%20C.%20Saores"> Ferdinando Da C. Saores</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One particularly popular social media platform is YouTube. 
It’s a video-sharing platform where users can submit videos, and other users can like, dislike, or comment on them. In this study, we conduct a binary classification task on YouTube video comments and reviews from users regarding the accession process of Timor Leste to become the eleventh member of the Association of South East Asian Nations (ASEAN). We scrape the data directly from the public YouTube videos and apply several pre-processing and weighting techniques. Before conducting the classification, we categorized the data into two classes, namely positive and negative. In the classification part, we apply the Support Vector Machine (SVM) algorithm. Compared with the Naïve Bayes algorithm, the experiment showed SVM simultaneously achieved 84.1% accuracy, 94.5% precision, and 73.8% recall. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=YouTube" title=" YouTube"> YouTube</a>, <a href="https://publications.waset.org/abstracts/search?q=sentiment%20analysis" title=" sentiment analysis"> sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20sector%20machine" title=" support sector machine"> support sector machine</a> </p> <a href="https://publications.waset.org/abstracts/162327/sentiment-analysis-on-the-east-timor-accession-process-to-the-asean" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162327.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42454</span> Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT 
and SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Assma%20Azeroual">Assma Azeroual</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Afdel"> Karim Afdel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El%20Hajji"> Mohamed El Hajji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Douzi"> Hassan Douzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks. These dominant blocks are located on the contours and their nearby textures. When a video frame changes noticeably, its dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate sliding-window ranks of those matrices. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions. 
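The SVD step can be sketched as follows: feature vectors of consecutive frames form a sliding-window matrix whose effective rank (singular values above a tolerance) rises when a visually new frame enters the window, marking a key frame. The 4-dimensional toy features and the tolerance below are illustrative assumptions, not the paper's Faber-Shauder DWT features:

```python
# Detect key frames from rank jumps of sliding-window feature matrices.
import numpy as np

def effective_rank(window, tol=1e-6):
    """Number of singular values of the window matrix above tol."""
    s = np.linalg.svd(np.array(window), compute_uv=False)
    return int((s > tol).sum())

# Toy per-frame feature vectors: frames 0-2 are near-identical (one shot),
# frame 3 starts a new shot.
frames = [[1.0, 0.0, 0.0, 0.0],
          [1.0, 0.001, 0.0, 0.0],
          [0.999, 0.0, 0.001, 0.0],
          [0.0, 0.0, 1.0, 1.0]]

ranks = [effective_rank(frames[max(0, i - 1):i + 1], tol=0.01)
         for i in range(len(frames))]
key_frames = [0] + [i for i in range(1, len(frames)) if ranks[i] > ranks[i - 1]]
print(key_frames)  # [0, 3]: the first frame and the shot change
```

Larger windows and a tolerance scaled to the leading singular value make the jump detection robust to gradual transitions such as fades.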
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">397</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42453</span> Speech Perception by Video Hosting Services Actors: Urban Planning Conflicts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Pilgun">M. Pilgun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The report presents the results of a study of the specifics of speech perception by actors of video hosting services, based on material from urban planning conflicts. To analyze the content, a multimodal approach using neural network technologies is employed. Analysis of word associations and associative networks of the relevant stimuli revealed the evaluative reactions of the actors, and analysis of the data identified key topics that generated negative and positive perceptions among the participants. 
The calculation of social stress and social well-being indices based on user-generated content made it possible to build a rating of road transport construction objects according to the degree of negative and positive perception by actors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=social%20media" title="social media">social media</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20perception" title=" speech perception"> speech perception</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20hosting" title=" video hosting"> video hosting</a>, <a href="https://publications.waset.org/abstracts/search?q=networks" title=" networks"> networks</a> </p> <a href="https://publications.waset.org/abstracts/137315/speech-perception-by-video-hosting-services-actors-urban-planning-conflicts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137315.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42452</span> Video-Observation: A Phenomenological Research Tool for International Relation?</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andreas%20Aagaard%20Nohr">Andreas Aagaard Nohr</a> </p> <p class="card-text"><strong>Abstract:</strong></p> International Relations is an academic discipline which is rarely in direct contact with its field. However, there has in recent years been a growing interest in the different agents within and beyond the state and their associated practices; yet some of the research tools with which to study them are not widely used. 
This paper introduces video-observation as a method for the study of IR and argues that it offers a unique way of studying the complexity of the everyday context of actors. The paper is divided into two main parts: First, the philosophical and methodological underpinnings of the kind of data that video-observation produces are discussed; primarily through a discussion of the phenomenology of Husserl, Heidegger, and Merleau-Ponty. Second, taking simulation of a WTO negotiation round as an example, the paper discusses how the data created can be analysed: in particular with regard to the structure of events, the temporal and spatial organization of activities, rhythm and periodicity, and the concrete role of artefacts and documents. The paper concludes with a discussion of the ontological, epistemological, and practical challenges and limitations that ought to be considered if video-observation is chosen as a method within the field of IR. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video-observation" title="video-observation">video-observation</a>, <a href="https://publications.waset.org/abstracts/search?q=phenomenology" title=" phenomenology"> phenomenology</a>, <a href="https://publications.waset.org/abstracts/search?q=international%20relations" title=" international relations"> international relations</a> </p> <a href="https://publications.waset.org/abstracts/19142/video-observation-a-phenomenological-research-tool-for-international-relation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42451</span> The Developing of Teaching 
Materials Online for Students in Thailand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pitimanus%20Bunlue">Pitimanus Bunlue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives of this study were to identify the unique characteristics of Salaya Old Market, Phutthamonthon, Nakhon Pathom, and to develop effective video media to promote homeland awareness among local people. The characteristic features of the community were summarized from historical data, community observation, and interviews with residents. The acquired data were used to develop a video describing the prominent features of the community. The quality of the video was later assessed by interviewing local people in the old market in terms of content accuracy, picture and narration quality, and the sense of homeland awareness created after watching it. The result was a six-minute video presenting the history and outstanding features of the community. Based on the interviews, the content accuracy was good, the picture quality and narration were very good, and most people developed a sense of homeland awareness after watching the video. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title="audio-visual">audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=creating%20homeland%20awareness" title=" creating homeland awareness"> creating homeland awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=Phutthamonthon%20Nakhon%20Pathom" title=" Phutthamonthon Nakhon Pathom"> Phutthamonthon Nakhon Pathom</a>, <a href="https://publications.waset.org/abstracts/search?q=research%20and%20development" title=" research and development"> research and development</a> </p> <a href="https://publications.waset.org/abstracts/55281/the-developing-of-teaching-materials-online-for-students-in-thailand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55281.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42450</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult for a computer system to evaluate musical instrument playing in a video. Television and film clips containing music are rich sources for analyzing musical instruments with modern machine learning technologies. 
In this research, we integrate the audio and video information sources using convolutional neural networks (CNNs) and pass the network-learned features through a recurrent neural network (RNN) to preserve the dynamic behavior of audio and video. We use different pre-trained CNNs for music and video feature extraction and then fine-tune each model: the music network uses 2D convolutions, while the video network uses 3D convolutions (C3D). Finally, we concatenate the music and video features while preserving their time-varying structure. A long short-term memory (LSTM) network is used for long-term dynamic feature characterization, followed by late fusion with a generalized mean. The proposed audio-video multimodal neural network achieves better performance in recognizing musical instruments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42449</span> Tackling the Digital Divide: Enhancing Video 
Consultation Access for Digital Illiterate Patients in the Hospital</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wieke%20Ellen%20Bouwes">Wieke Ellen Bouwes</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to unravel which factors enhance the accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews with patients, hospital employees, eHealth experts, and digital support organizations were held. Patients with low digital literacy received in-home support during real-time video consultations and were observed while setting up these consultations. Key findings highlight the importance of patient acceptance, emphasizing the benefits of video consultations and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Healthcare practitioners' familiarity with organizations that support patients in using digital tools enhances accessibility. Moreover, considerations regarding the Dutch General Data Protection Regulation (GDPR) influence the support patients receive, and provider readiness to use video consultations influences patient access. Further, alignment between learning styles and support methods appears to determine patients' ability to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore the effectiveness of learning methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20consultations" title="video consultations">video consultations</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20literacy%20skills" title=" digital literacy skills"> digital literacy skills</a>, <a href="https://publications.waset.org/abstracts/search?q=effectiveness%20of%20support" title=" effectiveness of support"> effectiveness of support</a>, <a href="https://publications.waset.org/abstracts/search?q=intra-%20and%20inter-organizational%20relationships" title=" intra- and inter-organizational relationships"> intra- and inter-organizational relationships</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20acceptance%20of%20video%20consultations" title=" patient acceptance of video consultations"> patient acceptance of video consultations</a> </p> <a href="https://publications.waset.org/abstracts/173756/tackling-the-digital-divide-enhancing-video-consultation-access-for-digital-illiterate-patients-in-the-hospital" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173756.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42448</span> Acute Bronchiolitis: Impact of an Educational Video on Mothers’ Knowledge, Attitudes, and Practices</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atitallah%20Sofien">Atitallah Sofien</a>, <a href="https://publications.waset.org/abstracts/search?q=Missaoui%20Nada"> Missaoui Nada</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Rabeh%20Rania"> Ben Rabeh Rania</a>, 
<a href="https://publications.waset.org/abstracts/search?q=Yahyaoui%20Salem"> Yahyaoui Salem</a>, <a href="https://publications.waset.org/abstracts/search?q=Mazigh%20Sonia"> Mazigh Sonia</a>, <a href="https://publications.waset.org/abstracts/search?q=Bouyahia%20Olfa"> Bouyahia Olfa</a>, <a href="https://publications.waset.org/abstracts/search?q=Boukthir%20Samir"> Boukthir Samir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Acute bronchiolitis (AB) is a real public health problem on a global and national scale. Its treatment is most often outpatient. The use of audio-visual supports, such as educational videos, is an innovation in therapeutic education in outpatient treatment. The aim of our study was to evaluate the impact of an educational video on the knowledge, attitudes, and practices of mothers of infants with AB. Methodology: This was a descriptive, analytical, and cross-sectional study with prospective data collection, including mothers of infants with AB. We assessed mothers' knowledge, attitudes, and practices regarding AB, and we created an educational video. We used a questionnaire written in Tunisian Arabic concerning sociodemographic data, mothers' knowledge and attitudes regarding AB, and their opinions on the video, as well as an observation grid to evaluate their practices on the nasopharyngeal unblocking technique. We compared the different parameters before and after watching the video. Results: We noted a statistically significant improvement in mothers' knowledge scores on AB (7.46 in the pre-test versus 14.08 in the post-test; p≤0.05), practices (12.42 in the pre-test versus 18 in the post-test; p≤0.05) and attitudes (5.86 in pre-test versus 9.02 in post-test; p≤0.05). Conclusion: The use of an educational video has a positive impact on the knowledge, practices, and attitudes of mothers towards AB. 
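The pre-test/post-test comparison reported above rests on a paired t-test. As a sketch only, the computation looks like this; the score lists are hypothetical stand-ins, since the abstract reports only the group means (e.g., 7.46 versus 14.08) and p≤0.05, not the raw data.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-statistic: t = mean(d) / (stdev(d) / sqrt(n)), with d = post - pre."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical pre/post knowledge scores for eight mothers (illustrative only).
pre = [6, 8, 7, 9, 6, 8, 7, 8]
post = [13, 15, 14, 16, 12, 14, 13, 15]
t = paired_t(pre, post)
# With n - 1 = 7 degrees of freedom, any t above the two-tailed 5% critical
# value of 2.365 would be reported as significant (p <= 0.05).
```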
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acute%20bronchiolitis" title="acute bronchiolitis">acute bronchiolitis</a>, <a href="https://publications.waset.org/abstracts/search?q=therapeutic%20education" title=" therapeutic education"> therapeutic education</a>, <a href="https://publications.waset.org/abstracts/search?q=mothers" title=" mothers"> mothers</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20video" title=" educational video"> educational video</a> </p> <a href="https://publications.waset.org/abstracts/175669/acute-bronchiolitis-impact-of-an-educational-video-on-mothers-knowledge-attitudes-and-practices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175669.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">42447</span> The Effect of Video Using in Teaching Speaking on Students of Non-Native English Speakers at STIE Perbanas Surabaya</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kartika%20Marta%20Budiana">Kartika Marta Budiana </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Low speaking competence among students who are non-native English speakers has long been a crucial issue for language teachers in Indonesia. This study explores the effect of using video in teaching speaking to non-native English-speaking students at STIE Perbanas Surabaya, including an investigation of the students' attitudes toward the video used in the classroom. 
This is a quantitative, experimental study based on concepts from the teaching of speaking and the use of video. Two classes were observed: the experimental group, consisting of 28 students, and the control group, consisting of 25 students. Before the treatment, both groups were given a pre-test to check their ability level. The experimental group was then taught how to conduct a meeting using a business English video, and afterwards a post-test was given to both groups. The instruments used to collect the data were an oral test and a questionnaire. The test scores show a significant positive difference in the experimental group, and the t-test confirms the hypothesis that the students' speaking competence achievement improved. In conclusion, the use of video has a significant effect on the students' speaking competence achievement. 
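The hypothesis test mentioned above can be sketched as a two-sample t-statistic on the post-test scores of the two groups. The lists below are hypothetical (the real groups had 28 and 25 students, and the abstract gives no raw scores); Welch's variant is chosen here because it tolerates unequal group sizes and variances.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t-statistic for groups with unequal variances and sizes."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical post-test speaking scores (illustrative only, not the study's data).
experimental = [78, 82, 85, 80, 79, 84, 81, 83]
control = [70, 72, 68, 74, 71, 69, 73, 70]
t = welch_t(experimental, control)
# A large positive t supports the claim that the experimental group
# outperformed the control group on the post-test.
```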
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=teaching" title=" teaching"> teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=speaking" title=" speaking"> speaking</a>, <a href="https://publications.waset.org/abstracts/search?q=Indonesia" title=" Indonesia "> Indonesia </a> </p> <a href="https://publications.waset.org/abstracts/37887/the-effect-of-video-using-in-teaching-speaking-on-students-of-non-native-english-speakers-at-stie-perbanas-surabaya" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37887.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">435</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=7">7</a></li> <li class="page-item"><a 
class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=1415">1415</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=1416">1416</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=big%20video%20data%20analysis&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a 
href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div 
class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>