Search results for: video quality assessment
<a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="video quality assessment"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 15139</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: video quality assessment</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15139</span> Subjective Quality Assessment for Impaired Videos with Varying Spatial and Temporal Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Rehan%20Usman">Muhammad Rehan Usman</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Arslan%20Usman"> Muhammad Arslan Usman</a>, <a href="https://publications.waset.org/abstracts/search?q=Soo%20Young%20Shin"> Soo Young Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The new era of digital communication has brought up many challenges that network operators need to overcome. The high demand of mobile data rates require improved networks, which is a challenge for the operators in terms of maintaining the quality of experience (QoE) for their consumers. In live video transmission, there is a sheer need for live surveillance of the videos in order to maintain the quality of the network. For this purpose objective algorithms are employed to monitor the quality of the videos that are transmitted over a network. In order to test these objective algorithms, subjective quality assessment of the streamed videos is required, as the human eye is the best source of perceptual assessment. In this paper we have conducted subjective evaluation of videos with varying spatial and temporal impairments. These videos were impaired with frame freezing distortions so that the impact of frame freezing on the quality of experience could be studied. We present subjective Mean Opinion Score (MOS) for these videos that can be used for fine tuning the objective algorithms for video quality assessment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=frame%20freezing" title="frame freezing">frame freezing</a>, <a href="https://publications.waset.org/abstracts/search?q=mean%20opinion%20score" title=" mean opinion score"> mean opinion score</a>, <a href="https://publications.waset.org/abstracts/search?q=objective%20assessment" title=" objective assessment"> objective assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a> </p> <a href="https://publications.waset.org/abstracts/26962/subjective-quality-assessment-for-impaired-videos-with-varying-spatial-and-temporal-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26962.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">494</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15138</span> Video Stabilization Using Feature Point Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shamsundar%20Kulkarni">Shamsundar Kulkarni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Video capturing by non-professionals will lead to unanticipated effects. Such as image distortion, image blurring etc. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos .A stable output video will be attained without the effect of jitter which is caused due to shaking of handheld camera during video recording. Firstly, salient points from each frame from the input video are identified and processed followed by optimizing and stabilize the video. Optimization includes the quality of the video stabilization. This method has shown good result in terms of stabilization and it discarded distortion from the output videos recorded in different circumstances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20stabilization" title="video stabilization">video stabilization</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20feature%20matching" title=" point feature matching"> point feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=salient%20points" title=" salient points"> salient points</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20measurement" title=" image quality measurement"> image quality measurement</a> </p> <a href="https://publications.waset.org/abstracts/57341/video-stabilization-using-feature-point-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57341.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">313</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15137</span> Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maryam%20Azimi">Maryam Azimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amin%20Banitalebi-Dehkordi"> Amin Banitalebi-Dehkordi</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuanyuan%20Dong"> Yuanyuan Dong</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20T.%20Pourazad"> Mahsa T. Pourazad</a>, <a href="https://publications.waset.org/abstracts/search?q=Panos%20Nasiopoulos"> Panos Nasiopoulos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While there exists a wide variety of Low Dynamic Range (LDR) quality metrics, only a limited number of metrics are designed specifically for the High Dynamic Range (HDR) content. With the introduction of HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of the existing full-reference LDR and HDR video quality metrics on HDR content and identify the most effective one for HDR applications. To this end, a new HDR video data set is created, which consists of representative indoor and outdoor video sequences with different brightness, motion levels and different representing types of distortions. The quality of each distorted video in this data set is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirm that VIF quality metric outperforms all to their tested metrics in the presence of the tested types of distortions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HDR" title="HDR">HDR</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20range" title=" dynamic range"> dynamic range</a>, <a href="https://publications.waset.org/abstracts/search?q=LDR" title=" LDR"> LDR</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20compression" title=" video compression"> video compression</a>, <a href="https://publications.waset.org/abstracts/search?q=HEVC" title=" HEVC"> HEVC</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20metrics" title=" video quality metrics"> video quality metrics</a> </p> <a href="https://publications.waset.org/abstracts/18171/evaluating-the-performance-of-existing-full-reference-quality-metrics-on-high-dynamic-range-hdr-video-content" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">525</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15136</span> A Multi Sensor Monochrome Video Fusion Using Image Quality Assessment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Prema%20Kumar">M. Prema Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Rajesh%20Kumar"> P. Rajesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The increasing interest in image fusion (combining images of two or more modalities such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. This paper gives a novel approach of merging the information content from several videos taken from the same scene in order to rack up a combined video that contains the finest information coming from different source videos. This process is known as video fusion which helps in providing superior quality (The term quality, connote measurement on the particular application.) image than the source images. In this technique different sensors (whose redundant information can be reduced) are used for various cameras that are imperative for capturing the required images and also help in reducing. In this paper Image fusion technique based on multi-resolution singular value decomposition (MSVD) has been used. The image fusion by MSVD is almost similar to that of wavelets. The idea behind MSVD is to replace the FIR filters in wavelet transform with singular value decomposition (SVD). It is computationally very simple and is well suited for real time applications like in remote sensing and in astronomy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi%20sensor%20image%20fusion" title="multi sensor image fusion">multi sensor image fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=MSVD" title=" MSVD"> MSVD</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20video" title=" monochrome video"> monochrome video</a> </p> <a href="https://publications.waset.org/abstracts/14866/a-multi-sensor-monochrome-video-fusion-using-image-quality-assessment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">572</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15135</span> The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Arslan%20Usman">Muhammad Arslan Usman</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Rehan%20Usman"> Muhammad Rehan Usman</a>, <a href="https://publications.waset.org/abstracts/search?q=Soo%20Young%20Shin"> Soo Young Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Live video streaming is one of the most widely used service among end users, yet it is a big challenge for the network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to the end users is continuous monitoring of live video streaming. For this purpose, there are several objective algorithms available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on the end users. In this paper we have conducted subjective evaluation tests on a set of video sequences containing temporal impairment known as frame freezing. Frame Freezing is considered as a transmission error as well as a hardware error which can result in loss of video frames on the reception side of a transmission system. In our subjective tests, we have performed tests on videos that contain a single freezing event and also for videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to give a comparison on the available No Reference (NR) objective algorithms. Finally, we have shown the performance of no reference algorithms used for objective evaluation of videos and suggested the algorithm that works better. The outcome of this study shows the importance of QoE and its effect on human perception. The results for the subjective evaluation can serve the purpose for validating objective algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=objective%20evaluation" title="objective evaluation">objective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=subjective%20evaluation" title=" subjective evaluation"> subjective evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20of%20experience%20%28QoE%29" title=" quality of experience (QoE)"> quality of experience (QoE)</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment%20%28VQA%29" title=" video quality assessment (VQA) "> video quality assessment (VQA) </a> </p> <a href="https://publications.waset.org/abstracts/26960/the-impact-of-temporal-impairment-on-quality-of-experience-qoe-in-video-streaming-a-no-reference-nr-subjective-and-objective-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26960.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">602</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15134</span> Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H. 265</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Sarumathi">S. Sarumathi</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Deepadharani"> C. Deepadharani</a>, <a href="https://publications.waset.org/abstracts/search?q=Garimella%20Archana"> Garimella Archana</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Dakshayani"> S. Dakshayani</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Logeshwaran"> D. Logeshwaran</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jayakumar"> D. Jayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayarangan%20Natarajan"> Vijayarangan Natarajan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The need of the hour for the customers who use a dial-up or a low broadband connection for their internet services is to access HD video data. This can be achieved by developing a new video format using H. 265. This is the latest video codec standard developed by ISO/IEC Moving Picture Experts Group (MPEG) and ITU-T Video Coding Experts Group (VCEG) on April 2013. This new standard for video compression has the potential to deliver higher performance than the earlier standards such as H. 264/AVC. In comparison with H. 264, HEVC offers a clearer, higher quality image at half the original bitrate. At this lower bitrate, it is possible to transmit high definition videos using low bandwidth. It doubles the data compression ratio supporting 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format which supports this H. 265 standard. The major areas of applications in the coming future would lead to enhancements in the performance level of digital television like Tata Sky and Sun Direct, BluRay Discs, Mobile Video, Video Conferencing and Internet and Live Video streaming. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=access%20HD%20video" title="access HD video">access HD video</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20265%20video%20standard" title=" H. 265 video standard"> H. 265 video standard</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20performance" title=" high performance"> high performance</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20quality%20image" title=" high quality image"> high quality image</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20bandwidth" title=" low bandwidth"> low bandwidth</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20video%20format" title=" new video format"> new video format</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20streaming%20applications" title=" video streaming applications"> video streaming applications</a> </p> <a href="https://publications.waset.org/abstracts/1881/efficient-storage-and-intelligent-retrieval-of-multimedia-streams-using-h-265" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">354</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15133</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a design of H.263 based wireless video transceiver is presented for wireless camera system. It uses standard WIFI transceiver and the covering area is up to 100m. Furthermore the standard H.263 video encoding technique is used for video compression since wireless video transmitter is unable to transmit high capacity raw data in real time and the implemented system is capable of streaming at speed of less than 1Mbps using NTSC 720x480 video. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15132</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for extraction of text subtitles in large video is proposed. The video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the motive of providing useful information about that video. So need arises to detect text present in video to understanding and video indexing. This is achieved in two steps. First step is text localization and the second step is text verification. The method of text detection can be extended to text recognition which finds applications in automatic video indexing; video annotation and content based video retrieval. The method has been tested on various types of videos. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15131</span> Video Summarization: Techniques and Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zaynab%20El%20Khattabi">Zaynab El Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youness%20Tabii"> Youness Tabii</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelhamid%20Benkaddour"> Abdelhamid Benkaddour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, huge amount of multimedia repositories make the browsing, retrieval and delivery of video contents very slow and even difficult tasks. Video summarization has been proposed to improve faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. The video summaries can be generated in many different forms. However, two fundamentals ways to generate summaries are static and dynamic. We present different techniques for each mode in the literature and describe some features used for generating video summaries. We conclude with perspective for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20summarization" title=" static summarization"> static summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20skimming" title=" video skimming"> video skimming</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20features" title=" semantic features"> semantic features</a> </p> <a href="https://publications.waset.org/abstracts/27644/video-summarization-techniques-and-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27644.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15130</span> Video-Based System for Support of Robot-Enhanced Gait Rehabilitation of Stroke Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Matja%C5%BE%20Divjak">Matjaž Divjak</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Zeli%C4%8D"> Simon Zelič</a>, <a href="https://publications.waset.org/abstracts/search?q=Ale%C5%A1%20Holobar"> Aleš Holobar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a dedicated video-based monitoring system for quantification of patient’s attention to visual feedback during robot assisted gait rehabilitation. Two different approaches for eye gaze and head pose tracking are tested and compared. Several metrics for assessment of patient’s attention are also presented. Experimental results with healthy volunteers demonstrate that unobtrusive video-based gaze tracking during the robot-assisted gait rehabilitation is possible and is sufficiently robust for quantification of patient’s attention and assessment of compliance with the rehabilitation therapy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video-based%20attention%20monitoring" title="video-based attention monitoring">video-based attention monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=gaze%20estimation" title=" gaze estimation"> gaze estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=stroke%20rehabilitation" title=" stroke rehabilitation"> stroke rehabilitation</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20compliance" title=" user compliance"> user compliance</a> </p> <a href="https://publications.waset.org/abstracts/11930/video-based-system-for-support-of-robot-enhanced-gait-rehabilitation-of-stroke-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">426</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15129</span> Evaluation of Video Development about Exclusive Breastfeeding as a Nutrition Education Media for Posyandu Cadre</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ari%20Istiany">Ari Istiany</a>, <a href="https://publications.waset.org/abstracts/search?q=Guspri%20Devi%20Artanti"> Guspri Devi Artanti</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Si"> M. Si</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Based on the results Riskesdas, it is known that breastfeeding awareness about the importance of exclusive breastfeeding is still low at only 15.3 %. These conditions resulted in a very infant at risk for infectious diseases, such as diarrhea and acute respiratory infection. Therefore, the aim of this study to evaluate the video development about exclusive breastfeeding as a nutrition education media for posyandu cadre. This research used development methods for making the video about exclusive breastfeeding. The study was conducted in urban areas Rawamangun, East Jakarta. Respondents of this study were 1 media experts from the Department of Educational Technology - UNJ, 2 subject matter experts from Department of Home Economics - UNJ and 20 posyandu cadres to assess the quality of the video. Aspects assessed include the legibility of text, image display quality, color composition, clarity of sound, music appropriateness, duration, suitability of the material and language. Data were analyzed descriptively likes frequency distribution table, the average value, and deviation standard. The result of this study showed that the average score assessment according to media experts, subject matter experts, and posyandu cadres respectively was 3.43 ± 0.51 (good), 4.37 ± 0.52 (very good) and 3.6 ± 0.73 (good). The conclusion is on exclusive breastfeeding video as feasible as a media for nutrition education. While suggestions for the improvement of visual media is multiply illustrations, add material about the correct way of breastfeeding and healthy baby pictures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=exclusive%20breastfeeding" title="exclusive breastfeeding">exclusive breastfeeding</a>, <a href="https://publications.waset.org/abstracts/search?q=posyandu%20cadre" title=" posyandu cadre"> posyandu cadre</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20education" title=" nutrition education"> nutrition education</a> </p> <a href="https://publications.waset.org/abstracts/2521/evaluation-of-video-development-about-exclusive-breastfeeding-as-a-nutrition-education-media-for-posyandu-cadre" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2521.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15128</span> Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahma%20Abed">Rahma Abed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahbi%20Bahroun"> Sahbi Bahroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ezzeddine%20Zagrouba"> Ezzeddine Zagrouba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the huge amount of data in videos, extracting the relevant frames became a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists in training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to the methods of the state of the art. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyframe%20extraction" title="keyframe extraction">keyframe extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20quality%20assessment" title=" face quality assessment"> face quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20in%20video%20recognition" title=" face in video recognition"> face in video recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/111347/keyframe-extraction-using-face-quality-assessment-and-convolution-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">233</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15127</span> Anonymous Editing Prevention Technique Using Gradient Method for High-Quality Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiwon%20Lee">Jiwon Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chanho%20Jung"> Chanho Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=Si-Hwan%20Jang"> Si-Hwan Jang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyung-Ill%20Kim"> Kyung-Ill Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanghyun%20Joo"> Sanghyun Joo</a>, <a href="https://publications.waset.org/abstracts/search?q=Wook-Ho%20Son"> Wook-Ho Son</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the advances in digital imaging technologies have led to development of high quality digital devices, there are a lot of illegal copies of copyrighted video content on the internet. Thus, we propose a high-quality (HQ) video watermarking scheme that can prevent these illegal copies from spreading out. The proposed scheme is applied spatial and temporal gradient methods to improve the fidelity and detection performance. Also, the scheme duplicates the watermark signal temporally to alleviate the signal reduction caused by geometric and signal-processing distortions. Experimental results show that the proposed scheme achieves better performance than previously proposed schemes and it has high fidelity. The proposed scheme can be used in broadcast monitoring or traitor tracking applications which need fast detection process to prevent illegally recorded video content from spreading out. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=editing%20prevention%20technique" title="editing prevention technique">editing prevention technique</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20method" title=" gradient method"> gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=luminance%20change" title=" luminance change"> luminance change</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20watermarking" title=" video watermarking"> video watermarking</a> </p> <a href="https://publications.waset.org/abstracts/42072/anonymous-editing-prevention-technique-using-gradient-method-for-high-quality-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42072.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15126</span> Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chung-Nan%20Lee">Chung-Nan Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Sheng-Wei%20Chu"> Sheng-Wei Chu</a>, <a href="https://publications.waset.org/abstracts/search?q=You-Chiun%20Wang"> You-Chiun Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive a video stream in the same transmission rate, which would degrade the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource allocation schemes in such LTE network: The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users. It then adopts a multicast transmission index to guarantee the fairness among users. On the other hand, the resource reuse (R2) scheme allows eNodeBs to transmit data on different frequency channels. Then, by introducing the concept of frequency reuse, it can further improve the overall service quality. Extensive simulation results show that the S2 and R2 schemes can respectively improve around 50% of fairness and 14% of video quality as compared with the common maximum throughput method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=LTE%20networks" title="LTE networks">LTE networks</a>, <a href="https://publications.waset.org/abstracts/search?q=multicast" title=" multicast"> multicast</a>, <a href="https://publications.waset.org/abstracts/search?q=resource%20allocation" title=" resource allocation"> resource allocation</a>, <a href="https://publications.waset.org/abstracts/search?q=layered%20video" title=" layered video"> layered video</a> </p> <a href="https://publications.waset.org/abstracts/57088/symbol-synchronization-and-resource-reuse-schemes-for-layered-video-multicast-service-in-long-term-evolution-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">389</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15125</span> Factorial Design Analysis for Quality of Video on MANET</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyoup-Sang%20Yoon">Hyoup-Sang Yoon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quality of video transmitted by mobile ad hoc networks (MANETs) can be influenced by several factors, including protocol layers; parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2k factorial design, we quantify the main and interactive effects of 7 factors on a response metric (i.e., mean opinion score (MOS) calculated by PSNR with Evalvid package) we then develop a first-order linear regression model between the influential factors and the performance metric. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=evalvid" title="evalvid">evalvid</a>, <a href="https://publications.waset.org/abstracts/search?q=full%20factorial%20design" title=" full factorial design"> full factorial design</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20ad%20hoc%20networks" title=" mobile ad hoc networks"> mobile ad hoc networks</a>, <a href="https://publications.waset.org/abstracts/search?q=ns-2" title=" ns-2"> ns-2</a> </p> <a href="https://publications.waset.org/abstracts/6956/factorial-design-analysis-for-quality-of-video-on-manet" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6956.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15124</span> Performance of High Efficiency Video Codec over Wireless Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Ayyub%20Khan">Mohd Ayyub Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nadeem%20Akhtar"> Nadeem Akhtar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, Interactive Video Games. However, the raw videos posses very high bandwidth which makes the compression a must before its transmission over the wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is latest state-of-the-art video coding standard developed by the Joint effort of ITU-T and ISO/IEC teams. HEVC is targeted for high resolution videos such as 4K or 8K resolutions that can fulfil the recent demands for video services. The compression ratio achieved by the HEVC is twice as compared to its predecessor H.264/AVC for same quality level. The compression efficiency is generally increased by removing more correlation between the frames/pixels using complex techniques such as extensive intra and inter prediction techniques. As more correlation is removed, the chances of interdependency among coded bits increases. Thus, bit errors may have large effect on the reconstructed video. Sometimes even single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of HEVC bitstream over additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes are also explored over the noisy channel. The video will be encoded using HEVC, and the coded bitstream is channel coded to provide some redundancies. The channel coded bitstream is then modulated using QAM and transmitted over AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using HEVC decoder. It is observed that as the signal to noise ratio of channel is decreased the quality of the reconstructed video decreases drastically. 

15123. Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract: Video has become an increasingly significant component of our everyday digital communication. With the advance toward richer content and higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first attempt at an end-to-end deep video compression model that jointly optimizes all video compression components. The method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and replacing duplicate images with a single image by recognizing and detecting minute changes using a generative adversarial network (GAN), recorded with long short-term memory (LSTM). Instead of the complete image, only the small changes generated by the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video, for all three color channels [Red, Green, Blue], to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and a more than 50% reduction in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
Procedia: https://publications.waset.org/abstracts/138827/efficient-video-compression-technique-using-convolutional-neural-networks-and-generative-adversarial-network | PDF: https://publications.waset.org/abstracts/138827.pdf | Downloads: 187
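
The per-channel SVD step is the most self-contained piece of this pipeline and can be sketched directly: keeping only the top-k singular values of each color plane yields a low-rank approximation of the frame. The rank k is an assumption of the sketch; the paper does not state its choice:

```python
# Rank-k SVD approximation of each color plane of a frame.
import numpy as np

def lowrank_frame(frame, k=32):
    """frame: HxWx3 uint8 array; returns its rank-k approximation."""
    out = np.empty(frame.shape, dtype=np.float64)
    for c in range(3):                      # R, G, B planes
        u, s, vt = np.linalg.svd(frame[:, :, c].astype(np.float64),
                                 full_matrices=False)
        out[:, :, c] = (u[:, :k] * s[:k]) @ vt[:k]   # top-k latent factors
    return np.clip(out, 0, 255).astype(np.uint8)
```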

15122. Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract: With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics like VMAF and SSIM are said to be among the best performing, but the evaluation of these metrics is predominantly done on professionally produced video content using professional tools, lighting conditions, etc. No study has particularly pinpointed the performance of the metrics on content taken by users on very commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even when they include content taken in poor lighting conditions using lower-end devices. These devices face many distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos; each of the six applied distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. Thus, to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as resolution increases, the performance of the metrics tends to become more accurate. The best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, SSIM and VMAF perform significantly better.
Keywords: distortion, metrics, performance, resolution, video quality assessment
Procedia: https://publications.waset.org/abstracts/145939/evaluation-of-video-quality-metrics-and-performance-comparison-on-contents-taken-from-most-commonly-used-devices | PDF: https://publications.waset.org/abstracts/145939.pdf | Downloads: 203
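
Two of the full-reference measurements used in comparisons like this one, PSNR and SSIM, can be computed frame by frame with scikit-image (VMAF and VQM need external tools). A full evaluation would average scores over each sequence and then correlate them with subjective ratings:

```python
# Frame-level PSNR and SSIM between a reference and a distorted frame.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_scores(ref, dist):
    """ref, dist: HxWx3 uint8 frames from the reference and distorted videos."""
    psnr = peak_signal_noise_ratio(ref, dist, data_range=255)
    ssim = structural_similarity(ref, dist, channel_axis=-1, data_range=255)
    return psnr, ssim
```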

15121. Lecture Video Indexing and Retrieval Using Topic Keywords
Authors: B. J. Sandesh, Saurabha Jirgi, S. Vidya, Prakash Eljer, Gowri Srinivasa
Abstract: In this paper, we propose a framework to help users search for and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. We use transcribed text from the video, together with documents relevant to the video topic extracted from the web, for this purpose. The keywords for indexing are found by applying non-negative matrix factorization (NMF) topic modeling techniques to the web documents. Our proposed technique first creates indices on the transcribed documents using the topic keywords, and these are mapped to the video to find the start and end times of the portions of the video for a particular topic. This time information is stored in the index table along with the topic keyword, which is used to retrieve the specific portions of the video for the query provided by the users.
Keywords: video indexing and retrieval, lecture videos, content based video search, multimodal indexing
Procedia: https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords | PDF: https://publications.waset.org/abstracts/77066.pdf | Downloads: 250
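
The keyword-extraction step can be sketched with scikit-learn: NMF on a TF-IDF matrix of the transcript plus related web documents, with the top-weighted terms of each component serving as index keywords. The corpus and parameters below are placeholders, not the paper's:

```python
# NMF topic keywords from a TF-IDF document-term matrix.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["transcribed lecture text ...",
        "related web document ..."]               # placeholder corpus
tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
x = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0)         # one component per topic
w = nmf.fit_transform(x)                          # document-topic weights
terms = tfidf.get_feature_names_out()
for t, comp in enumerate(nmf.components_):
    top = comp.argsort()[-5:][::-1]               # five highest-weighted terms
    print(f"topic {t}:", [terms[i] for i in top])
```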
Sandesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Saurabha%20Jirgi"> Saurabha Jirgi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Vidya"> S. Vidya</a>, <a href="https://publications.waset.org/abstracts/search?q=Prakash%20Eljer"> Prakash Eljer</a>, <a href="https://publications.waset.org/abstracts/search?q=Gowri%20Srinivasa"> Gowri Srinivasa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a framework that helps users search for and retrieve the portions of a lecture video that interest them. This is achieved by temporally segmenting and indexing the lecture video using topic keywords. For this purpose, we use text transcribed from the video together with topically relevant documents extracted from the web. The indexing keywords are found by applying non-negative matrix factorization (NMF) topic modeling to the web documents. Our technique first creates indices on the transcribed documents using the topic keywords; these are then mapped to the video to find the start and end times of the portions covering a particular topic. This time information is stored in the index table along with the topic keyword and is used to retrieve the specific portions of the video matching a user’s query. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20indexing%20and%20retrieval" title="video indexing and retrieval">video indexing and retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20based%20video%20search" title=" content based video search"> content based video search</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20indexing" title=" multimodal indexing"> multimodal indexing</a> </p> <a href="https://publications.waset.org/abstracts/77066/lecture-video-indexing-and-retrieval-using-topic-keywords" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">250</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15120</span> Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a huge amount of lecture video data available for public use, and many more lecture videos are created and uploaded every day. Searching this huge database for videos on a required topic is a challenging task; therefore, an efficient method for video retrieval is needed. 
We present an approach for automated video indexing and search in large lecture video archives. Because the amount of lecture video data is huge, processing it in a centralized computation framework is very inefficient; hence, the Hadoop framework for distributed computing over big video data is used. The first step in the process is automatic video segmentation and key-frame detection, which offers a visual guideline for navigating the video content. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video-level and segment-level keywords are obtained for content-based video browsing and search. For a large database, the performance of the indexing process can be improved by using distributed computing on the Hadoop framework. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20video%20data" title=" big video data"> big video data</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20retrieval" title=" video retrieval"> video retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=hadoop" title=" hadoop"> hadoop</a> </p> <a href="https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26648.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15119</span> Structural Analysis on the Composition of Video Game Virtual Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qin%20Luofeng">Qin Luofeng</a>, <a href="https://publications.waset.org/abstracts/search?q=Shen%20Siqi"> Shen Siqi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 58 years since the first video game came into being, the video game industry has undergone an explosive evolution. Video games exert great influence on society and have, to some extent, become a reflection of public life. Virtual spaces in video games host activities much as real spaces do, which is why some architects pay attention to video games. However, compared with research on the appearance of games, there is a lack of comprehensive theory on the construction of video game virtual spaces. This paper first reviews the literature and conducts theoretical research on virtual space in video games, and then draws analogies with treatments of spatial phenomena in literary and film theory. Finally, it proposes a three-layer framework for the construction of video game virtual spaces: “algorithmic space - narrative space - player space”, corresponding to the exterior, expressive, and affective parts of the game space. Each sub-space is illustrated with numerous instances from published video games. 
We hope this work promotes the joint development of video games and architecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20game" title="video game">video game</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20space" title=" virtual space"> virtual space</a>, <a href="https://publications.waset.org/abstracts/search?q=narrativity" title=" narrativity"> narrativity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20space" title=" social space"> social space</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20connection" title=" emotional connection"> emotional connection</a> </p> <a href="https://publications.waset.org/abstracts/118519/structural-analysis-on-the-composition-of-video-game-virtual-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15118</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one mechanism for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected frames may be neither the most distinct nor cover the entire content of the video. In this paper, we propose a video summarization method that provides principled objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function, maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed algorithm casts key frame extraction as an optimization problem. Through experiments, we demonstrate that the proposed approach produces summaries with better coverage of the entire video content and less redundancy among key frames compared with state-of-the-art approaches. 
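<p class="card-text">To make the coverage-versus-redundancy trade-off concrete, the sketch below greedily selects key frames from per-frame feature vectors, scoring each candidate by how well it represents all frames minus how much it overlaps with frames already chosen. This is only an illustrative surrogate: the Gaussian-kernel similarity stands in for the paper’s quadratic mutual information, and the feature representation, kernel width, and trade-off weight <code>lam</code> are assumptions rather than values from the paper.</p> <pre><code># Greedy coverage-vs-redundancy key frame selection (illustrative sketch).
# The Gaussian kernel is a stand-in for quadratic mutual information.
import numpy as np

def gaussian_sim(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def select_key_frames(features, k, lam=0.5):
    """features: sequence of per-frame feature vectors; k: summary size."""
    n = len(features)
    S = np.array([[gaussian_sim(features[i], features[j]) for j in range(n)]
                  for i in range(n)])
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for c in range(n):
            if c in selected:
                continue
            coverage = S[c].mean()                      # how well c represents the video
            redundancy = max((S[c][s] for s in selected), default=0.0)
            gain = coverage - lam * redundancy          # coverage minus overlap penalty
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
    return sorted(selected)
</code></pre>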
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15117</span> An Investigation of Surface Water Quality in an Industrial Area Using Integrated Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Priti%20Saha">Priti Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Biswajit%20Paul"> Biswajit Paul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rapid urbanization and industrialization has increased the pollution load in surface water bodies. However, these water bodies are major source of water for drinking, irrigation, industrial activities and fishery. Therefore, water quality assessment is paramount importance to evaluate its suitability for all these purposes. This study focus to evaluate the surface water quality of an industrial city in eastern India through integrating interdisciplinary techniques. The multi-purpose Water Quality Index (WQI) assess the suitability for drinking, irrigation as well as fishery of forty-eight sampling locations, where 8.33% have excellent water quality (WQI:0-25) for fishery and 10.42%, 20.83% and 45.83% have good quality (WQI:25-50), which represents its suitability for drinking irrigation and fishery respectively. However, the industrial water quality was assessed through Ryznar Stability Index (LSI), which affirmed that only 6.25% of sampling locations have neither corrosive nor scale forming properties (RSI: 6.2-6.8). Integration of these statistical analysis with geographical information system (GIS) helps in spatial assessment. It identifies of the regions where the water quality is suitable for its use in drinking, irrigation, fishery as well as industrial activities. This research demonstrates the effectiveness of statistical and GIS techniques for water quality assessment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=surface%20water" title="surface water">surface water</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20quality%20assessment" title=" water quality assessment"> water quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20quality%20index" title=" water quality index"> water quality index</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20assessment" title=" spatial assessment"> spatial assessment</a> </p> <a href="https://publications.waset.org/abstracts/103597/an-investigation-of-surface-water-quality-in-an-industrial-area-using-integrated-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/103597.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15116</span> The Developing of Teaching Materials Online for Students in Thailand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pitimanus%20Bunlue">Pitimanus Bunlue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives of this study were to identify the unique characteristics of Salaya Old market, Phutthamonthon, Nakhon Pathom and develop the effective video media to promote the homeland awareness among local people and the characteristic features of this community were collectively summarized based on historical data, community observation, and people’s interview. The acquired data were used to develop a media describing prominent features of the community. The quality of the media was later assessed by interviewing local people in the old market in terms of content accuracy, video, and narration qualities, and sense of homeland awareness after watching the video. The result shows a 6-minute video media containing historical data and outstanding features of this community was developed. Based on the interview, the content accuracy was good. The picture quality and the narration were very good. Most people developed a sense of homeland awareness after watching the video also as well. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title="audio-visual">audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=creating%20homeland%20awareness" title=" creating homeland awareness"> creating homeland awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=Phutthamonthon%20Nakhon%20Pathom" title=" Phutthamonthon Nakhon Pathom"> Phutthamonthon Nakhon Pathom</a>, <a href="https://publications.waset.org/abstracts/search?q=research%20and%20development" title=" research and development"> research and development</a> </p> <a href="https://publications.waset.org/abstracts/55281/the-developing-of-teaching-materials-online-for-students-in-thailand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55281.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">291</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15115</span> Online Versus Face-To-Face – How Do Video Consultations Change The Doctor-Patient-Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Markus%20Feufel">Markus Feufel</a>, <a href="https://publications.waset.org/abstracts/search?q=Friederike%20Kendel"> Friederike Kendel</a>, <a href="https://publications.waset.org/abstracts/search?q=Caren%20Hilger"> Caren Hilger</a>, <a href="https://publications.waset.org/abstracts/search?q=Selamawit%20Woldai"> Selamawit Woldai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the corona pandemic, the use of video consultation has increased remarkably. For vulnerable groups such as oncological patients, the advantages seem obvious. But how does video consultation potentially change the doctor-patient relationship compared to face-to-face consultation? Which barriers may hinder the effective use of this consultation format in practice? We are presenting first results from a mixed-methods field study, funded by Federal Ministry of Health, which will provide the basis for a hands-on guide for both physicians and patients on how to improve the quality of video consultations. We use a quasi-experimental design to analyze qualitative and quantitative differences between face-to-face and video consultations based on video recordings of N = 64 actual counseling sessions (n = 32 for each consultation format). Data will be recorded from n = 32 gynecological and n = 32 urological cancer patients at two clinics. After the consultation, all patients will be asked to fill out a questionnaire about their consultation experience. For quantitative analyses, the counseling sessions will be systematically compared in terms of verbal and nonverbal communication patterns. Relative frequencies of eye contact and the information exchanged will be compared using 𝝌2 -tests. The validated questionnaire MAPPIN'Obsdyad will be used to assess the expression of shared decision-making parameters. In addition, semi-structured interviews will be conducted with n = 10 physicians and n = 10 patients experienced with video consultation, for which a qualitative content analysis will be conducted. We will elaborate the comprehensive methodological approach we used to compare video vs. 
face-to-face consultations and present first evidence on how video consultations change the doctor-patient interaction. We will also outline possible barriers to video consultations and best practices for overcoming them. Based on the results, we will present and discuss recommendations for preparing and conducting high-quality video consultations from the perspective of both physicians and patients. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20consultation" title="video consultation">video consultation</a>, <a href="https://publications.waset.org/abstracts/search?q=patient-doctor-relationship" title=" patient-doctor-relationship"> patient-doctor-relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20applications" title=" digital applications"> digital applications</a>, <a href="https://publications.waset.org/abstracts/search?q=technical%20barriers" title=" technical barriers"> technical barriers</a> </p> <a href="https://publications.waset.org/abstracts/153083/online-versus-face-to-face-how-do-video-consultations-change-the-doctor-patient-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153083.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15114</span> A New Categorization of Image Quality Metrics Based on a Model of Human Quality Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Grazia%20Albanesi">Maria Grazia Albanesi</a>, <a href="https://publications.waset.org/abstracts/search?q=Riccardo%20Amadeo"> Riccardo Amadeo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents a new model of the human image quality assessment process: the aim is to highlight the foundations of the image quality metrics proposed in the literature by identifying the cognitive/physiological or mathematical principles behind their development and their relation to the actual human quality assessment process. The model enables a novel categorization of objective and subjective image quality metrics. Our work includes an overview of the most widely used and most effective objective metrics in the literature, and for each of them we underline its main characteristics with reference to the rationale of the proposed model and categorization. From this analysis, we highlight a problem that affects all the presented metrics: many aspects of human bias are not taken into account at all. We then propose a possible methodology to address this issue. 
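<p class="card-text">As a concrete reference for the objective side of this categorization, the snippet below computes two widely used full-reference metrics, PSNR and SSIM, with scikit-image. The synthetic reference/distorted image pair is a placeholder; nothing here is specific to the model proposed in the paper.</p> <pre><code># Two standard full-reference objective image quality metrics
# (requires scikit-image; the image pair is synthetic).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))            # stand-in reference image
distorted = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

print("PSNR:", peak_signal_noise_ratio(reference, distorted, data_range=1.0))
print("SSIM:", structural_similarity(reference, distorted, data_range=1.0))
</code></pre>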
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title="eye-tracking">eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20quality%20assessment%20metric" title=" image quality assessment metric"> image quality assessment metric</a>, <a href="https://publications.waset.org/abstracts/search?q=MOS" title=" MOS"> MOS</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20of%20user%20experience" title=" quality of user experience"> quality of user experience</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/8906/a-new-categorization-of-image-quality-metrics-based-on-a-model-of-human-quality-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8906.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15113</span> Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, software quality issues have come to be seen as important subject as we see an enormous growth of agencies involved in software industries. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainties. Software certification is the extension of quality by means that quality needs to be measured prior to certification granting process. This research participates in solving the problem of software assessment by proposing a model for assessment and certification of software product that uses a fuzzy inference engine to integrate both of process–driven and application-driven quality assurance strategies. The key idea of the on hand model is to improve the compactness and the interpretability of the model’s fuzzy rules via employing an ant colony optimization algorithm (ACO), which tries to find good rules description by dint of compound rules initially expressed with traditional single rules. The model has been tested by case study and the results have demonstrated feasibility and practicability of the model in a real environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20quality" title="software quality">software quality</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20assurance" title=" quality assurance"> quality assurance</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20certification%20model" title=" software certification model"> software certification model</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20assessment" title=" software assessment"> software assessment</a> </p> <a href="https://publications.waset.org/abstracts/18443/evolving-software-assessment-and-certification-models-using-ant-colony-optimization-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">524</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15112</span> Potential Usefulness of Video Lectures as a Tool to Improve Synchronous and Asynchronous the Online Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omer%20Shujat%20Bhatti">Omer Shujat Bhatti</a>, <a href="https://publications.waset.org/abstracts/search?q=Afshan%20Huma"> Afshan Huma</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Online educational system were considered a great opportunity for distance learning. In recent days of COVID19 pandemic, it enable the continuation of educational activities at all levels of education, from primary school to the top level universities. One of the key considered element in supporting the online educational system is video lectures. The current research explored the usefulness of the video lectures delivered to technical students of masters level with a focus on MSc Sustainable Environmental design students who have diverse backgrounds in the formal educational system. Hence they were unable to cope right away with the online system and faced communication and understanding issues in the lecture session due to internet and allied connectivity issues. Researcher used self prepared video lectures for respective subjects and provided them to the students using Youtube channel and subject based Whatsapp groups. Later, students were asked about the usefulness of the lectures towards a better understanding of the subject and an overall enhanced learning experience. More than 80% of the students appreciated the effort and requested it to be part of the overall system. Data collection was done using an online questionnaire which was prior briefed to the students with the purpose of research. It was concluded that video lectures should be considered an integral part of the lecture sessions and must be provided prior to the lecture session, ensuring a better quality of delivery. It was also recommended that the existing system must be upgraded to support the availability of these video lectures through the portal. Teachers training must be provided to help develop quality video content ensuring that is able to cover the content and courses taught. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20lectures" title="video lectures">video lectures</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20distance%20education" title=" online distance education"> online distance education</a>, <a href="https://publications.waset.org/abstracts/search?q=synchronous%20instruction" title=" synchronous instruction"> synchronous instruction</a>, <a href="https://publications.waset.org/abstracts/search?q=asynchronous%20communication" title=" asynchronous communication"> asynchronous communication</a> </p> <a href="https://publications.waset.org/abstracts/153679/potential-usefulness-of-video-lectures-as-a-tool-to-improve-synchronous-and-asynchronous-the-online-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153679.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">116</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15111</span> Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Assma%20Azeroual">Assma Azeroual</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Afdel"> Karim Afdel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El%20Hajji"> Mohamed El Hajji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Douzi"> Hassan Douzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summary, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. The frame is characterized uniquely by his contours which are represented by the dominant blocks. These dominant blocks are located on the contours and its near textures. When the video frames have a noticeable changement, its dominant blocks changed, then we can extracte a key frame. The dominant blocks of every frame is computed, and then feature vectors are extracted from the dominant blocks image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate sliding windows ranks of those matrices. Finally the computed ranks are traced and then we are able to extract key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15110</span> Translation Quality Assessment: Proposing a Linguistic-Based Model for Translation Criticism with Considering Ideology and Power Relations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mehrnoosh%20Pirhayati">Mehrnoosh Pirhayati</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, the researcher tried to propose a model of Translation Criticism (TC) regarding the phenomenon of Translation Quality Assessment (TQA). With changing the general view on re/writing as an illegal act, the researcher defined a scale for the act of translation and determined the redline of translation with other products. This research attempts to show TC as a related phenomenon to TQA. This study shows that TQA with using the rules and factors of TC as depicted in both product-oriented analysis and process-oriented analysis, determines the orientation or the level of the quality of translation. This study also depicts that TC, regarding TQA’s perspective, reveals the aim of the translation of original text and the root of ideological manipulation and re/writing. On the other hand, this study stresses the existence of a direct relationship between the linguistic materials and semiotic codes of a text or book. This study can be fruitful for translators, scholars, translation criticizers, and translation quality assessors, and also it is applicable in the area of pedagogy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=a%20model%20of%20translation%20criticism" title="a model of translation criticism">a model of translation criticism</a>, <a href="https://publications.waset.org/abstracts/search?q=a%20model%20of%20translation%20quality%20assessment" title=" a model of translation quality assessment"> a model of translation quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=critical%20discourse%20analysis%20%28CDA%29" title=" critical discourse analysis (CDA)"> critical discourse analysis (CDA)</a>, <a href="https://publications.waset.org/abstracts/search?q=re%2Fwriting" title=" re/writing"> re/writing</a>, <a href="https://publications.waset.org/abstracts/search?q=translation%20criticism%20%28TC%29" title=" translation criticism (TC)"> translation criticism (TC)</a>, <a href="https://publications.waset.org/abstracts/search?q=translation%20quality%20assessment%20%28TQA%29" title=" translation quality assessment (TQA)"> translation quality assessment (TQA)</a> </p> <a href="https://publications.waset.org/abstracts/138890/translation-quality-assessment-proposing-a-linguistic-based-model-for-translation-criticism-with-considering-ideology-and-power-relations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138890.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=504">504</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=505">505</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=video%20quality%20assessment&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script 
src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>