
Search results for: H.263 video encoding digital signal processing

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 8333

8333. H.263 Based Video Transceiver for Wireless Camera System
Authors: Won-Ho Kim
Abstract: In this paper, the design of an H.263-based wireless video transceiver for a wireless camera system is presented. It uses a standard Wi-Fi transceiver, and its coverage range is up to 100 m. The standard H.263 video encoding technique is used for video compression, since a wireless video transmitter cannot transmit high-volume raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps.
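To make the bitrate budget concrete, the sketch below drives ffmpeg from Python to compress captured video to H.263 at under 1 Mbps. This is an illustrative pipeline, not the authors' implementation: the ffmpeg binary, the file names, and the 900 kbps target are assumptions, and since baseline H.263 only accepts CIF-family picture sizes, the 720x480 NTSC input is scaled to CIF first.

```python
# Minimal sketch: H.263 compression under a ~1 Mbps radio budget via ffmpeg.
# Assumes an ffmpeg build with the native "h263" encoder; names are placeholders.
import subprocess

def encode_h263(src: str, dst: str, bitrate_kbps: int = 900) -> None:
    """Scale NTSC 720x480 input to CIF (352x288), a size baseline H.263
    supports, and encode below the given bitrate."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", src,                   # e.g. the camera capture, 720x480 @ 29.97 fps
        "-vf", "scale=352:288",      # baseline H.263 accepts CIF-family sizes only
        "-c:v", "h263",
        "-b:v", f"{bitrate_kbps}k",  # keep the stream under the Wi-Fi link budget
        dst,
    ], check=True)

encode_h263("ntsc_capture.avi", "stream.avi")
```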
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8332</span> Development of a Tesla Music Coil from Signal Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samaniego%20Campoverde%20Jos%C3%A9%20Enrique">Samaniego Campoverde José Enrique</a>, <a href="https://publications.waset.org/abstracts/search?q=Rosero%20Mu%C3%B1oz%20Jorge%20Enrique"> Rosero Muñoz Jorge Enrique</a>, <a href="https://publications.waset.org/abstracts/search?q=Luzcando%20Narea%20Lorena%20Elizabeth"> Luzcando Narea Lorena Elizabeth</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a practical and theoretical model for the operation of the Tesla coil using digital signal processing. The research is based on the analysis of ten scientific papers exploring the development and operation of the Tesla coil. Starting from the Testa coil, several modifications were carried out on the Tesla coil, with the aim of amplifying the digital signal by making use of digital signal processing. To achieve this, an amplifier with a transistor and digital filters provided by MATLAB software were used, which were chosen according to the characteristics of the signals in question. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=tesla%20coil" title="tesla coil">tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20signal%20process" title=" digital signal process"> digital signal process</a>, <a href="https://publications.waset.org/abstracts/search?q=equalizer" title=" equalizer"> equalizer</a>, <a href="https://publications.waset.org/abstracts/search?q=graphical%20environment" title=" graphical environment"> graphical environment</a> </p> <a href="https://publications.waset.org/abstracts/170965/development-of-a-tesla-music-coil-from-signal-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">117</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8331</span> Anonymous Editing Prevention Technique Using Gradient Method for High-Quality Video</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiwon%20Lee">Jiwon Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Chanho%20Jung"> Chanho Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=Si-Hwan%20Jang"> Si-Hwan Jang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyung-Ill%20Kim"> Kyung-Ill Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanghyun%20Joo"> Sanghyun Joo</a>, <a href="https://publications.waset.org/abstracts/search?q=Wook-Ho%20Son"> Wook-Ho Son</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Since the advances in digital imaging technologies have led to development of high quality digital devices, there are a lot of illegal copies of copyrighted video content on the internet. Thus, we propose a high-quality (HQ) video watermarking scheme that can prevent these illegal copies from spreading out. The proposed scheme is applied spatial and temporal gradient methods to improve the fidelity and detection performance. Also, the scheme duplicates the watermark signal temporally to alleviate the signal reduction caused by geometric and signal-processing distortions. Experimental results show that the proposed scheme achieves better performance than previously proposed schemes and it has high fidelity. The proposed scheme can be used in broadcast monitoring or traitor tracking applications which need fast detection process to prevent illegally recorded video content from spreading out. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=editing%20prevention%20technique" title="editing prevention technique">editing prevention technique</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20method" title=" gradient method"> gradient method</a>, <a href="https://publications.waset.org/abstracts/search?q=luminance%20change" title=" luminance change"> luminance change</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20watermarking" title=" video watermarking"> video watermarking</a> </p> <a href="https://publications.waset.org/abstracts/42072/anonymous-editing-prevention-technique-using-gradient-method-for-high-quality-video" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42072.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">456</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8330</span> An Ultrasonic Signal Processing System for Tomographic Imaging of Reinforced Concrete Structures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Forero-Garcia">Edwin Forero-Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaime%20Vitola"> Jaime Vitola</a>, <a href="https://publications.waset.org/abstracts/search?q=Brayan%20Cardenas"> Brayan Cardenas</a>, <a href="https://publications.waset.org/abstracts/search?q=Johan%20Casagua"> Johan Casagua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research article presents the integration of electronic and computer systems, which developed an ultrasonic signal processing system that performs the capture, adaptation, and analog-digital conversion to later carry out its processing and visualization. The capture and adaptation of the signal were carried out from the design and implementation of an analog electronic system distributed in stages: 1. Coupling of impedances; 2. Analog filter; 3. Signal amplifier. After the signal conditioning was carried out, the ultrasonic information was digitized using a digital microcontroller to carry out its respective processing. The digital processing of the signals was carried out in MATLAB software for the elaboration of A-Scan, B and D-Scan types of ultrasonic images. Then, advanced processing was performed using the SAFT technique to improve the resolution of the Scan-B-type images. Thus, the information from the ultrasonic images was displayed in a user interface developed in .Net with Visual Studio. For the validation of the system, ultrasonic signals were acquired, and in this way, the non-invasive inspection of the structures was carried out and thus able to identify the existing pathologies in them. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acquisition" title="acquisition">acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a>, <a href="https://publications.waset.org/abstracts/search?q=SAFT" title=" SAFT"> SAFT</a>, <a href="https://publications.waset.org/abstracts/search?q=HMI" title=" HMI"> HMI</a> </p> <a href="https://publications.waset.org/abstracts/162674/an-ultrasonic-signal-processing-system-for-tomographic-imaging-of-reinforced-concrete-structures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162674.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8329</span> Optimizing Quantum Machine Learning with Amplitude and Phase Encoding Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Om%20Viroje">Om Viroje</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Quantum machine learning represents a frontier in computational technology, promising significant advancements in data processing capabilities. This study explores the significance of data encoding techniques, specifically amplitude and phase encoding, in this emerging field. By employing a comparative analysis methodology, the research evaluates how these encoding techniques affect the accuracy, efficiency, and noise resilience of quantum algorithms. Our findings reveal that amplitude encoding enhances algorithmic accuracy and noise tolerance, whereas phase encoding significantly boosts computational efficiency. These insights are crucial for developing robust quantum frameworks that can be effectively applied in real-world scenarios. In conclusion, optimizing encoding strategies is essential for advancing quantum machine learning, potentially transforming various industries through improved data processing and analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quantum%20machine%20learning" title="quantum machine learning">quantum machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20encoding" title=" data encoding"> data encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=amplitude%20encoding" title=" amplitude encoding"> amplitude encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=phase%20encoding" title=" phase encoding"> phase encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=noise%20resilience" title=" noise resilience"> noise resilience</a> </p> <a href="https://publications.waset.org/abstracts/193480/optimizing-quantum-machine-learning-with-amplitude-and-phase-encoding-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193480.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">14</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8328</span> Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Vignesh">S. Vignesh</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20S.%20Rangasamy"> K. S. Rangasamy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the device capable of detecting and disabling digital cameras. The system locates the camera and then neutralizes it. Every digital camera has an image sensor known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, which is invisible to the human eye, at a distance of about 20 feet. It then collects video of these reflections with a camcorder. Then the video of the reflections is transferred to a computer connected to the device, where it is sent through image processing algorithms that pick out infrared light bouncing back. Once the camera is detected, the device would project an invisible infrared laser into the camera's lens, thereby overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras but are neither a health danger to humans nor a physical damage to cameras. We also discuss the simplified design of the above device that can used in theatres to prevent piracy. The domains being covered here are optics and image processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CCD" title="CCD">CCD</a>, <a href="https://publications.waset.org/abstracts/search?q=optics" title=" optics"> optics</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=D3CIP" title=" D3CIP"> D3CIP</a> </p> <a href="https://publications.waset.org/abstracts/1736/detecting-and-disabling-digital-cameras-using-d3cip-algorithm-based-on-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1736.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8327</span> Detection of Clipped Fragments in Speech Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sergei%20Aleinik">Sergei Aleinik</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuri%20Matveev"> Yuri Matveev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper a novel method for the detection of clipping in speech signals is described. It is shown that the new method has better performance than known clipping detection methods, is easy to implement, and is robust to changes in signal amplitude, size of data, etc. Statistical simulation results are presented. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clipping" title="clipping">clipping</a>, <a href="https://publications.waset.org/abstracts/search?q=clipped%20signal" title=" clipped signal"> clipped signal</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20signal%20processing" title=" speech signal processing"> speech signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20signal%20processing" title=" digital signal processing"> digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/4816/detection-of-clipped-fragments-in-speech-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">392</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8326</span> Going Viral: Constructively Aligning the Use of Digital Video to Effectively Support Faculty Development</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samuel%20Olugbenga%20King">Samuel Olugbenga King</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This review article, which is a synthesis of the relevant research literature, focuses on the capabilities of digital video to support, facilitate and enhance faculty development. 
Keywords: clipping, clipped signal, speech signal processing, digital signal processing
Procedia: https://publications.waset.org/abstracts/4816/detection-of-clipped-fragments-in-speech-signals | PDF: https://publications.waset.org/abstracts/4816.pdf | Downloads: 392

8326. Going Viral: Constructively Aligning the Use of Digital Video to Effectively Support Faculty Development
Authors: Samuel Olugbenga King
Abstract: This review article, a synthesis of the relevant research literature, focuses on the capabilities of digital video to support, facilitate and enhance faculty development. Based on the literature review, faculty development (i.e., academic or educational development) requires the continued adoption of cohesive theoretical frameworks to guide research and practice; the incorporation of relevant tools from analogous fields, such as teacher professional development; systematic program evaluations; and detailed descriptions of practice to further practice and creative development. A cohesive five-heuristic framework is subsequently outlined to inform the design and evaluation of the use of digital video, so as to address the barriers to advancing faculty development identified through the literature review. Alternative impact-evaluation approaches are also described, and the limitations of using digital video for faculty development are highlighted. This paper is therefore conceived as one way to meaningfully leverage the educational affordances of digital video to address some lingering gaps in faculty development.
Keywords: digital video, faculty/educational development, evaluation, scholarship of teaching and learning (SoTL)
Procedia: https://publications.waset.org/abstracts/51184/going-viral-constructively-aligning-the-use-of-digital-video-to-effectively-support-faculty-development | PDF: https://publications.waset.org/abstracts/51184.pdf | Downloads: 352

8325. Tackling the Digital Divide: Enhancing Video Consultation Access for Digital Illiterate Patients in the Hospital
Authors: Wieke Ellen Bouwes
Abstract: This study aims to unravel which factors enhance the accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews were held with patients, hospital employees, eHealth experts, and digital support organizations; in addition, patients with low digital literacy received in-home support during real-time video consultations and were observed during the set-up of these consultations. Key findings highlight the importance of patient acceptance, emphasizing the benefits of video consultations and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Familiarity among healthcare practitioners with support organizations, which help patients use digital tools, enhances accessibility. Moreover, considerations regarding the Dutch General Data Protection Regulation (GDPR) law influence the support patients receive.
Also, provider readiness to use video consultations influences patient access. Further, alignment between learning styles and support methods seems to determine patients' ability to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore the effectiveness of learning methods.
Keywords: video consultations, digital literacy skills, effectiveness of support, intra- and inter-organizational relationships, patient acceptance of video consultations
Procedia: https://publications.waset.org/abstracts/173756/tackling-the-digital-divide-enhancing-video-consultation-access-for-digital-illiterate-patients-in-the-hospital | PDF: https://publications.waset.org/abstracts/173756.pdf | Downloads: 74

8324. Forensic Challenges in Source Device Identification for Digital Videos
Authors: Mustapha Aminu Bagiwa, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris, Suleman Khan
Abstract: Video source device identification has become a problem of concern in numerous domains, especially multimedia security and digital investigation, because videos are now used as evidence in legal proceedings. Source device identification aims at identifying the source device using the content it produced. However, owing to affordable processing tools and the influx of digital content-generating devices, source device identification remains a major problem within the digital forensic community. In this paper, we discuss source device identification for digital videos by reviewing techniques proposed in the literature for model or specific-device identification, with the aim of identifying salient open challenges for future research.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20forgery" title="video forgery">video forgery</a>, <a href="https://publications.waset.org/abstracts/search?q=source%20camcorder" title=" source camcorder"> source camcorder</a>, <a href="https://publications.waset.org/abstracts/search?q=device%20identification" title=" device identification"> device identification</a>, <a href="https://publications.waset.org/abstracts/search?q=forgery%20detection" title=" forgery detection "> forgery detection </a> </p> <a href="https://publications.waset.org/abstracts/21641/forensic-challenges-in-source-device-identification-for-digital-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21641.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">631</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8323</span> Remote Video Supervision via DVB-H Channels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanen%20Ghabi">Hanen Ghabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Youssef%20Oudhini"> Youssef Oudhini</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassen%20Mnif"> Hassen Mnif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> By reference to recent publications dealing with the same problem, and as a follow-up to this research work already published, we propose in this article a new original idea of tele supervision exploiting the opportunities offered by the DVB-H system. The objective is to exploit the RF channels of the DVB-H network in order to insert digital remote monitoring images dedicated to a remote solar power plant. Indeed, the DVB-H (Digital Video Broadcast-Handheld) broadcasting system was designed and deployed for digital broadcasting on the same platform as the parent system, DVB-T. We claim to be able to exploit this approach in order to satisfy the operator of remote photovoltaic sites (and others) in order to remotely control the components of isolated installations by means of video surveillance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title="video surveillance">video surveillance</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20video%20broadcast-handheld" title=" digital video broadcast-handheld"> digital video broadcast-handheld</a>, <a href="https://publications.waset.org/abstracts/search?q=photovoltaic%20sites" title=" photovoltaic sites"> photovoltaic sites</a>, <a href="https://publications.waset.org/abstracts/search?q=AVC" title=" AVC"> AVC</a> </p> <a href="https://publications.waset.org/abstracts/147516/remote-video-supervision-via-dvb-h-channels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147516.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8322</span> Extraction of Text Subtitles in Multimedia Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amarjit%20Singh">Amarjit Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method for extraction of text subtitles in large video is proposed. The video data needs to be annotated for many multimedia applications. Text is incorporated in digital video for the motive of providing useful information about that video. So need arises to detect text present in video to understanding and video indexing. This is achieved in two steps. First step is text localization and the second step is text verification. The method of text detection can be extended to text recognition which finds applications in automatic video indexing; video annotation and content based video retrieval. The method has been tested on various types of videos. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video" title="video">video</a>, <a href="https://publications.waset.org/abstracts/search?q=subtitles" title=" subtitles"> subtitles</a>, <a href="https://publications.waset.org/abstracts/search?q=extraction" title=" extraction"> extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=frames" title=" frames"> frames</a> </p> <a href="https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24441.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">601</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8321</span> Robustness of MIMO-OFDM Schemes for Future Digital TV to Carrier Frequency Offset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Sankara%20Reddy">D. Sankara Reddy</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Kranthi%20Kumar"> T. 
Keywords: video, subtitles, extraction, annotation, frames
Procedia: https://publications.waset.org/abstracts/24441/extraction-of-text-subtitles-in-multimedia-systems | PDF: https://publications.waset.org/abstracts/24441.pdf | Downloads: 601

8321. Robustness of MIMO-OFDM Schemes for Future Digital TV to Carrier Frequency Offset
Authors: D. Sankara Reddy, T. Kranthi Kumar, K. Sreevani
Abstract: This paper investigates the impact of carrier frequency offset (CFO) on the performance of different MIMO-OFDM schemes with high spectral efficiency for the next generation of terrestrial digital TV. We show that all studied MIMO-OFDM schemes are sensitive to CFO when it is greater than 1% of the intercarrier spacing, and that the Alamouti scheme is the most CFO-sensitive MIMO scheme.
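The quoted 1%-of-intercarrier-spacing sensitivity can be reproduced with a toy baseband simulation: a CFO rotates the received samples, which after the FFT leaks energy between subcarriers (inter-carrier interference). A minimal single-antenna numpy sketch with invented parameters (no MIMO, channel, or coding):

```python
import numpy as np

N = 1024                                   # number of OFDM subcarriers
rng = np.random.default_rng(3)
syms = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)  # QPSK

tx = np.fft.ifft(syms) * np.sqrt(N)        # one OFDM symbol (cyclic prefix omitted)

def evm(cfo_fraction: float) -> float:
    """Error vector magnitude after a CFO expressed as a fraction of the
    intercarrier spacing (includes the common phase error)."""
    n = np.arange(N)
    rx = tx * np.exp(2j * np.pi * cfo_fraction * n / N)    # apply frequency offset
    eq = np.fft.fft(rx) / np.sqrt(N)
    return float(np.sqrt(np.mean(np.abs(eq - syms) ** 2)))

for f in (0.001, 0.01, 0.05):
    print(f"CFO = {f:.1%} of spacing -> EVM = {evm(f):.4f}")
```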
Keywords: modulation and multiplexing (MIMO-OFDM), signal processing for transmission carrier frequency offset, future digital TV, imaging and signal processing
Procedia: https://publications.waset.org/abstracts/22713/robustness-of-mimo-ofdm-schemes-for-future-digital-tv-to-carrier-frequency-offset | PDF: https://publications.waset.org/abstracts/22713.pdf | Downloads: 487

8320. A Study on the Different Components of a Typical Back-Scattered Chipless RFID Tag Reflection
Authors: Fatemeh Babaeian, Nemai Chandra Karmakar
Abstract: Chipless RFID is a wireless tracking and identification system that uses passive tags to encode data. The advantage of a chipless RFID tag is that it is planar and printable on low-cost materials such as paper and plastic, and the printed tag can be attached to items at the labelling level. Since the price of a chipless RFID tag can be as low as a fraction of a cent, the technology has the potential to compete with conventional optical barcode labels. However, because of the tag's passive structure, processing the reflected signal is a crucial challenge. The signal captured from a tag attached to an item consists of several components: the reflection from the reader antenna, the reflection from the item, the tag's structural-mode RCS component, and the tag's antenna-mode RCS. All these components are summed in both the time and frequency domains. The reflection from the item and the structural-mode RCS component can distort or saturate the frequency-domain signal and hinder extraction of the desired component, the antenna-mode RCS. It is therefore necessary to study the tag reflection in both time and frequency domains to better understand the nature of the captured chipless RFID signal. Further benefits of this study are an optimised encoding technique at the tag-design level and a better processing algorithm for the chipless RFID signal at the decoding level. In this paper, the reflection from a typical back-scattered chipless RFID tag with six resonances is analysed, and the different components of the signal are separated in both time and frequency domains. Moreover, the time-domain signal corresponding to each resonator of the tag is studied. The data for this processing were captured from simulation in CST Microwave Studio 2017. The outcome of this study is an understanding of the different components of a measured signal in a chipless RFID system, and the identification of a research gap: the need for an optimum detection algorithm for tag ID extraction.
Keywords: antenna mode RCS, chipless RFID tag, resonance, structural mode RCS
Procedia: https://publications.waset.org/abstracts/103734/a-study-on-the-different-components-of-a-typical-back-scattered-chipless-rfid-tag-reflection | PDF: https://publications.waset.org/abstracts/103734.pdf | Downloads: 200

8319. Classification of Cochannel Signals Using Cyclostationary Signal Processing and Deep Learning
Authors: Bryan Crompton, Daniel Giger, Tanay Mehta, Apurva Mody
Abstract: The task of classifying radio frequency (RF) signals has seen recent success with deep neural network models. In this work, we present a combined signal processing and machine learning approach to signal classification for cochannel anomalous signals. The power spectral density and cyclostationary signal processing features of a captured signal are computed and fed into a neural net to produce a classification decision. This combined signal preprocessing and machine learning approach allows for simpler neural networks with fast training times and small computational resource requirements for inference, at the cost of longer preprocessing time.
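One standard cyclostationary feature is the cyclic autocorrelation, which is non-negligible only at a signal's cycle frequencies; features of this kind, together with the PSD, are what would be fed to the classifier. A numpy sketch for a toy BPSK signal (parameters invented; the paper's exact feature set is not specified in the abstract):

```python
import numpy as np

fs, n = 1000.0, 4000
rng = np.random.default_rng(4)
t = np.arange(n) / fs
bits = rng.choice([-1.0, 1.0], n // 100).repeat(100)           # 10-baud BPSK
x = bits * np.cos(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(n)

def cyclic_autocorr(x: np.ndarray, alpha: float, fs: float) -> complex:
    """Zero-lag cyclic autocorrelation at cycle frequency alpha."""
    t = np.arange(x.size) / fs
    return complex(np.mean(x * np.conj(x) * np.exp(-2j * np.pi * alpha * t)))

# BPSK on a 100 Hz carrier has a strong cycle frequency at 2 * fc = 200 Hz.
for alpha in (0.0, 150.0, 200.0):
    print(f"alpha = {alpha:5.0f} Hz -> |R| = {abs(cyclic_autocorr(x, alpha, fs)):.3f}")
```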
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title="signal processing">signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclostationary%20signal%20processing" title=" cyclostationary signal processing"> cyclostationary signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20classification" title=" signal classification"> signal classification</a> </p> <a href="https://publications.waset.org/abstracts/164958/classification-of-cochannel-signals-using-cyclostationary-signal-processing-and-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164958.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8318</span> Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Assma%20Azeroual">Assma Azeroual</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Afdel"> Karim Afdel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20El%20Hajji"> Mohamed El Hajji</a>, <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Douzi"> Hassan Douzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summary, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. The frame is characterized uniquely by his contours which are represented by the dominant blocks. These dominant blocks are located on the contours and its near textures. When the video frames have a noticeable changement, its dominant blocks changed, then we can extracte a key frame. The dominant blocks of every frame is computed, and then feature vectors are extracted from the dominant blocks image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate sliding windows ranks of those matrices. Finally the computed ranks are traced and then we are able to extract key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FSDWT" title="FSDWT">FSDWT</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=shot%20detection" title=" shot detection"> shot detection</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a> </p> <a href="https://publications.waset.org/abstracts/18296/video-shot-detection-and-key-frame-extraction-using-faber-shauder-dwt-and-svd" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">398</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8317</span> Voice Signal Processing and Coding in MATLAB Generating a Plasma Signal in a Tesla Coil for a Security System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20Jimenez">Juan Jimenez</a>, <a href="https://publications.waset.org/abstracts/search?q=Erika%20Yambay"> Erika Yambay</a>, <a href="https://publications.waset.org/abstracts/search?q=Dayana%20Pilco"> Dayana Pilco</a>, <a href="https://publications.waset.org/abstracts/search?q=Brayan%20Parra"> Brayan Parra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an investigation of voice signal processing and coding using MATLAB, with the objective of generating a plasma signal on a Tesla coil within a security system. The approach focuses on using advanced voice signal processing techniques to encode and modulate the audio signal, which is then amplified and applied to a Tesla coil. The result is the creation of a striking visual effect of voice-controlled plasma with specific applications in security systems. The article explores the technical aspects of voice signal processing, the generation of the plasma signal, and its relationship to security. The implications and creative potential of this technology are discussed, highlighting its relevance at the forefront of research in signal processing and visual effect generation in the field of security systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20processing" title="voice signal processing">voice signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20coding" title=" voice signal coding"> voice signal coding</a>, <a href="https://publications.waset.org/abstracts/search?q=MATLAB" title=" MATLAB"> MATLAB</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20signal" title=" plasma signal"> plasma signal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tesla%20coil" title=" Tesla coil"> Tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20system" title=" security system"> security system</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20effects" title=" visual effects"> visual effects</a>, <a href="https://publications.waset.org/abstracts/search?q=audiovisual%20interaction" title=" audiovisual interaction"> audiovisual interaction</a> </p> <a href="https://publications.waset.org/abstracts/170828/voice-signal-processing-and-coding-in-matlab-generating-a-plasma-signal-in-a-tesla-coil-for-a-security-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170828.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8316</span> Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20N.%20Raju">U. S. N. Raju</a>, <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sai%20Kiran"> Kothuri Sai Kiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Meena%20G.%20Kamal"> Meena G. Kamal</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Nikhil%20Pabba"> Vinay Nikhil Pabba</a>, <a href="https://publications.waset.org/abstracts/search?q=Suresh%20Kanaparthi"> Suresh Kanaparthi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics from this huge database is a challenging task. Therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, Hadoop Framework for distributed computing for Big Video Data is used. First, step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for the video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology on key-frames. The OCR and detected slide text line types are adopted for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. 
Keywords: video lectures, big video data, video retrieval, hadoop
Procedia: https://publications.waset.org/abstracts/26648/distributed-processing-for-content-based-lecture-video-retrieval-on-hadoop-framework | PDF: https://publications.waset.org/abstracts/26648.pdf | Downloads: 534

8315. Improvement of Piezoresistive Pressure Sensor Accuracy by Means of Current Loop Circuit Using Optimal Digital Signal Processing
Authors: Peter A. L’vov, Roman S. Konovalov, Alexey A. L’vov
Abstract: The paper presents an advanced digital modification of the conventional current loop circuit for piezoelectric pressure transducers. Optimal DSP algorithms based on maximum-likelihood estimation of the current loop responses are applied to reduce measurement errors. The loop circuit has additional advantages, such as the ability to operate with any type of resistive or reactive sensor, and offers a considerable increase in the accuracy and quality of measurements compared with AC bridges. The results obtained are intended to allow high-accuracy, expensive measuring bridges to be replaced with current loop circuits.
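Under additive Gaussian noise, maximum-likelihood estimation of a loop response coincides with least squares; as a toy stand-in for the paper's algorithms, the sketch below recovers a sensor resistance from noisy loop-voltage samples (current, resistance, and noise level all invented):

```python
import numpy as np

rng = np.random.default_rng(6)
i_loop = 0.004                     # 4 mA loop excitation current (assumed known)
r_true = 1250.0                    # sensor resistance to be measured, ohms
v = i_loop * r_true + 0.002 * rng.standard_normal(500)   # noisy voltage samples

# With Gaussian noise and known loop current, the ML estimate of R is the
# least-squares solution: the sample mean of v divided by i.
r_ml = np.mean(v) / i_loop
print(f"R_ML = {r_ml:.1f} ohm (true value {r_true} ohm)")
```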
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=current%20loop" title="current loop">current loop</a>, <a href="https://publications.waset.org/abstracts/search?q=maximum%20likelihood%20method" title=" maximum likelihood method"> maximum likelihood method</a>, <a href="https://publications.waset.org/abstracts/search?q=optimal%20digital%20signal%20processing" title=" optimal digital signal processing"> optimal digital signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=precise%20pressure%20measurement" title=" precise pressure measurement"> precise pressure measurement</a> </p> <a href="https://publications.waset.org/abstracts/22685/improvement-of-piezoresistive-pressure-sensor-accuracy-by-means-of-current-loop-circuit-using-optimal-digital-signal-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22685.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8314</span> Graph Similarity: Algebraic Model and Its Application to Nonuniform Signal Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nileshkumar%20Vishnav">Nileshkumar Vishnav</a>, <a href="https://publications.waset.org/abstracts/search?q=Aditya%20Tatu"> Aditya Tatu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A recent approach of representing graph signals and graph filters as polynomials is useful for graph signal processing. In this approach, the adjacency matrix plays pivotal role; instead of the more common approach involving graph-Laplacian. In this work, we follow the adjacency matrix based approach and corresponding algebraic signal model. We further expand the theory and introduce the concept of similarity of two graphs. The similarity of graphs is useful in that key properties (such as filter-response, algebra related to graph) get transferred from one graph to another. We demonstrate potential applications of the relation between two similar graphs, such as nonuniform filter design, DTMF detection and signal reconstruction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=graph%20signal%20processing" title="graph signal processing">graph signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=algebraic%20signal%20processing" title=" algebraic signal processing"> algebraic signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20similarity" title=" graph similarity"> graph similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=isospectral%20graphs" title=" isospectral graphs"> isospectral graphs</a>, <a href="https://publications.waset.org/abstracts/search?q=nonuniform%20signal%20processing" title=" nonuniform signal processing"> nonuniform signal processing</a> </p> <a href="https://publications.waset.org/abstracts/59404/graph-similarity-algebraic-model-and-its-application-to-nonuniform-signal-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8313</span> A Hybrid Digital Watermarking Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nazish%20Saleem%20Abbas">Nazish Saleem Abbas</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Haris%20Jamil"> Muhammad Haris Jamil</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Sharif"> Hamid Sharif</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital watermarking is a technique that allows an individual to add and hide secret information, copyright notice, or other verification message inside a digital audio, video, or image. Today, with the advancement of technology, modern healthcare systems manage patients’ diagnostic information in a digital way in many countries. When transmitted between hospitals through the internet, the medical data becomes vulnerable to attacks and requires security and confidentiality. Digital watermarking techniques are used in order to ensure the authenticity, security and management of medical images and related information. This paper proposes a watermarking technique that embeds a watermark in medical images imperceptibly and securely. In this work, digital watermarking on medical images is carried out using the Least Significant Bit (LSB) with the Discrete Cosine Transform (DCT). The proposed methods of embedding and extraction of a watermark in a watermarked image are performed in the frequency domain using LSB by XOR operation. The quality of the watermarked medical image is measured by the Peak signal-to-noise ratio (PSNR). It was observed that the watermarked medical image obtained performing XOR operation between DCT and LSB survived compression attack having a PSNR up to 38.98. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=watermarking" title="watermarking">watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=DCT" title=" DCT"> DCT</a>, <a href="https://publications.waset.org/abstracts/search?q=LSB" title=" LSB"> LSB</a>, <a href="https://publications.waset.org/abstracts/search?q=PSNR" title=" PSNR"> PSNR</a> </p> <a href="https://publications.waset.org/abstracts/185946/a-hybrid-digital-watermarking-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185946.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">47</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8312</span> Smartphone Video Source Identification Based on Sensor Pattern Noise</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Raquel%20Ramos%20L%C3%B3pez">Raquel Ramos López</a>, <a href="https://publications.waset.org/abstracts/search?q=Anissa%20El-Khattabi"> Anissa El-Khattabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Lucila%20Sandoval%20Orozco"> Ana Lucila Sandoval Orozco</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20Javier%20Garc%C3%ADa%20Villalba"> Luis Javier García Villalba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An increasing number of mobile devices with integrated cameras has meant that most digital video comes from these devices. These digital videos can be made anytime, anywhere and for different purposes. They can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace the origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of mobile device which generated the video. Its procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and Wavelet Transform performs the aforementioned identification process. We also present experimental results that support the validity of the techniques used and show promising results. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20video" title="digital video">digital video</a>, <a href="https://publications.waset.org/abstracts/search?q=forensics%20analysis" title=" forensics analysis"> forensics analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame" title=" key frame"> key frame</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20device" title=" mobile device"> mobile device</a>, <a href="https://publications.waset.org/abstracts/search?q=PRNU" title=" PRNU"> PRNU</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor%20noise" title=" sensor noise"> sensor noise</a>, <a href="https://publications.waset.org/abstracts/search?q=source%20identification" title=" source identification"> source identification</a> </p> <a href="https://publications.waset.org/abstracts/70332/smartphone-video-source-identification-based-on-sensor-pattern-noise" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8311</span> Vibroacoustic Modulation with Chirp Signal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dong%20Liu">Dong Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> By sending a high-frequency probe wave and a low-frequency pump wave to a specimen, the vibroacoustic method evaluates the defect’s severity according to the modulation index of the received signal. Many studies experimentally proved the significant sensitivity of the modulation index to the tiny contact type defect. However, it has also been found that the modulation index was highly affected by the frequency of probe or pump waves. Therefore, the chirp signal has been introduced to the VAM method since it can assess multiple frequencies in a relatively short time duration, so the robustness of the VAM method could be enhanced. Consequently, the signal processing method needs to be modified accordingly. Various studies utilized different algorithms or combinations of algorithms for processing the VAM signal method by chirp excitation. These signal process methods were compared and used for processing a VAM signal acquired from the steel samples. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vibroacoustic%20modulation" title="vibroacoustic modulation">vibroacoustic modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20acoustic%20modulation" title=" nonlinear acoustic modulation"> nonlinear acoustic modulation</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20acoustic%20NDT%26E" title=" nonlinear acoustic NDT&amp;E"> nonlinear acoustic NDT&amp;E</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20health%20monitoring" title=" structural health monitoring"> structural health monitoring</a> </p> <a href="https://publications.waset.org/abstracts/155764/vibroacoustic-modulation-with-chirp-signal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8310</span> Hybridization of Mathematical Transforms for Robust Video Watermarking Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Harpal%20Singh">Harpal Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sakshi%20Batra"> Sakshi Batra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The widespread and easy accesses to multimedia contents and possibility to make numerous copies without loss of significant fidelity have roused the requirement of digital rights management. Thus this problem can be effectively solved by Digital watermarking technology. This is a concept of embedding some sort of data or special pattern (watermark) in the multimedia content; this information will later prove ownership in case of a dispute, trace the marked document’s dissemination, identify a misappropriating person or simply inform user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. Extensive counts of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, copyright dilemma for the multimedia industry increases. Video watermarking had been proposed in recent years to serve the issue of illicit copying and allocation of videos. It is the process of embedding copyright information in video bit streams. Practically video watermarking schemes have to address some serious challenges as compared to image watermarking schemes like real-time requirements in the video broadcasting, large volume of inherently redundant data between frames, the unbalance between the motion and motionless regions etc. and they are particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median and crop attacks. 
In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. The scheme uses the different transforms to embed watermarks on different layers. For this purpose, the video frames are partitioned into (R, G, B) layers, and the watermark is embedded in two forms using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is further enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. The same key is thus required for both watermark embedding and extraction, and must be shared between the owner and the verifier over a secure channel. The paper evaluates performance using quantitative metrics, namely peak signal-to-noise ratio, structural similarity index and correlation values, and applies several attacks to assess robustness. Experimental results demonstrate that the proposed scheme withstands a variety of video processing attacks while preserving imperceptibility. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=robustness" title=" robustness"> robustness</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20watermarking" title=" video watermarking"> video watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=watermark" title=" watermark"> watermark</a> </p> <a href="https://publications.waset.org/abstracts/89255/hybridization-of-mathematical-transforms-for-robust-video-watermarking-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/89255.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">224</span> </span> </div> </div>
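A hedged sketch of the DWT-plus-SVD embedding step described above, assuming the PyWavelets package: the singular values of the host's LL sub-band are perturbed by those of the watermark. The FrFT encryption stage and the key-embedding scheme are omitted, the watermark is assumed to match the LL sub-band's size, and extraction would additionally require side information, as in standard SVD watermarking.

```python
import numpy as np
import pywt

def embed_svd_dwt(frame, watermark, alpha=0.05):
    # One-level DWT of the host frame; the singular values of the LL
    # sub-band are perturbed by those of the watermark (assumed to be
    # the same size as LL). FrFT encryption and key embedding omitted.
    LL, (LH, HL, HH) = pywt.dwt2(frame, 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    LL_marked = U @ np.diag(S + alpha * Sw) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
```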
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8309</span> A Passive Digital Video Authentication Technique Using Wavelet Based Optical Flow Variation Thresholding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20Remya">R. S. Remya</a>, <a href="https://publications.waset.org/abstracts/search?q=U.%20S.%20Sethulekshmi"> U. S. Sethulekshmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Detecting the authenticity of a video is an important issue in digital forensics, as video is used as silent evidence in court, for example in child pornography and movie piracy cases, insurance claims, cases involving scientific fraud, and traffic monitoring. The biggest threat to video data is the availability of modern open video editing tools, which enable easy editing of videos without leaving any trace of tampering. In this paper, we propose an efficient passive method for detecting inter-frame video tampering, along with its type and location, by estimating the optical flow of wavelet features of adjacent frames and thresholding the variation in the estimated features. The performance of the algorithm is compared with z-score thresholding, and it achieves an efficiency above 95% on all tested databases. The proposed method works well for videos with dynamic (forensics) as well as static (surveillance) backgrounds. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow" title=" optical flow"> optical flow</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20flow%20variation" title=" optical flow variation"> optical flow variation</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20tampering" title=" video tampering"> video tampering</a> </p> <a href="https://publications.waset.org/abstracts/45252/a-passive-digital-video-authentication-technique-using-wavelet-based-optical-flow-variation-thresholding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45252.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div>
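The variation-thresholding idea above can be prototyped with a dense optical-flow estimator: track the mean flow magnitude between adjacent frames and flag transitions that deviate strongly from the sequence statistics. This sketch uses OpenCV's Farneback flow on raw grayscale frames rather than the paper's wavelet features, and the k-sigma rule is an illustrative threshold, not the authors' detector.

```python
import numpy as np
import cv2

def flow_variation(frames):
    # Mean dense-flow magnitude between each pair of adjacent
    # grayscale frames (Farneback estimator).
    mags = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(float(np.linalg.norm(flow, axis=2).mean()))
    return np.array(mags)

def tamper_candidates(frames, k=3.0):
    # Flag transitions whose flow deviates from the sequence statistics
    # by more than k standard deviations (illustrative threshold).
    m = flow_variation(frames)
    return np.where(np.abs(m - m.mean()) > k * m.std())[0]
```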
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8308</span> Temporal Progression of Episodic Memory as Function of Encoding Condition and Age: Further Investigation of Action Memory in School-Aged Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farzaneh%20Badinlou">Farzaneh Badinlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Reza%20Kormi-Nouri"> Reza Kormi-Nouri</a>, <a href="https://publications.waset.org/abstracts/search?q=Monika%20Knopf"> Monika Knopf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Studies of adults&rsquo; episodic memory have found that enacted encoding not only improves recall performance but also speeds up retrieval during the recall period. The current study focused on exploring the temporal progression of different encoding conditions in younger and older school children. 204 students from two age groups (8 and 14 years old) participated in this study. During the study phase, we examined action encoding in two forms: participants either performed the phrases themselves (SPT) or observed the experimenter perform them (EPT); these were compared with verbal encoding, in which participants listened to verbal action phrases (VT). At the test phase, we used immediate and delayed free recall tests. We observed significant differences in memory performance as a function of age group and encoding condition in both immediate and delayed free recall tests. Moreover, the temporal progression of recall was faster in older children compared with younger ones. The interaction of age group and encoding condition was significant only in delayed recall, showing that younger children performed better in EPT, whereas older children performed better in SPT. It is proposed that the enactment effect in the form of SPT enhances item-specific processing, whereas EPT improves relational information processing, and that these differential processes are responsible for the results observed in younger and older children. The role of memory strategies and information processing methods in younger and older children was considered in this study. Moreover, the temporal progression of recall was faster under action encoding in the form of SPT and EPT compared with verbal encoding in both immediate and delayed free recall, and the size of the enactment effect increased steadily throughout the recall period. The results of the present study provide further evidence that action memory is explained with an emphasis on the notions of information processing and strategic views. These results also reveal the temporal progression of recall as a new dimension of episodic memory in children. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action%20memory" title="action memory">action memory</a>, <a href="https://publications.waset.org/abstracts/search?q=enactment%20effect" title=" enactment effect"> enactment effect</a>, <a href="https://publications.waset.org/abstracts/search?q=episodic%20memory" title=" episodic memory"> episodic memory</a>, <a href="https://publications.waset.org/abstracts/search?q=school-aged%20children" title=" school-aged children"> school-aged children</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20progression" title=" temporal progression"> temporal progression</a> </p> <a href="https://publications.waset.org/abstracts/71738/temporal-progression-of-episodic-memory-as-function-of-encoding-condition-and-age-further-investigation-of-action-memory-in-school-aged-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71738.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div>
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8307</span> High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hanaa%20M.%20Abdelgawad">Hanaa M. Abdelgawad</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20Safar"> Mona Safar</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20M.%20Wahba"> Ayman M. Wahba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real-time image and video processing is in demand in many computer vision applications, e.g. video surveillance, traffic management and medical imaging. The processing of such video applications requires high computational power; therefore, the optimal solution is the collaboration of the CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration: CPU utilization drops, and the frame rate reaches 60 fps on a 1080p full-HD input video stream. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=high%20level%20synthesis" title="high level synthesis">high level synthesis</a>, <a href="https://publications.waset.org/abstracts/search?q=canny%20edge%20detection" title=" canny edge detection"> canny edge detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hardware%20accelerators" title=" hardware accelerators"> hardware accelerators</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/21304/high-level-synthesis-of-canny-edge-detection-algorithm-on-zynq-platform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21304.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div>
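For context, a pure-software reference of the offloaded block is easy to time with OpenCV; the HLS accelerator effectively replaces the cv2.Canny call in a loop like this. The input file name is hypothetical and the thresholds are typical illustrative values, not parameters from the paper.

```python
import time
import cv2

cap = cv2.VideoCapture('input_1080p.mp4')       # hypothetical file name
count, t0 = 0, time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)           # the block offloaded to PL
    count += 1
print(f'software Canny: {count / (time.perf_counter() - t0):.1f} fps')
```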
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20estimation" title="motion estimation">motion estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=test%20zone%20search" title=" test zone search"> test zone search</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20efficiency%20video%20coding" title=" high efficiency video coding"> high efficiency video coding</a>, <a href="https://publications.waset.org/abstracts/search?q=processing%20element" title=" processing element"> processing element</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a> </p> <a href="https://publications.waset.org/abstracts/70881/motion-estimator-architecture-with-optimized-number-of-processing-elements-for-high-efficiency-video-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70881.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8305</span> Key Frame Based Video Summarization via Dependency Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janya%20Sainui">Janya Sainui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a rapid growth of digital videos and data communications, video summarization that provides a shorter version of the video for fast video browsing and retrieval is necessary. Key frame extraction is one of the mechanisms to generate video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most different frames and/or not cover the entire content of a video. In this paper, we propose a method of video summarization which provides the reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual informaion as our objective functions for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach that produces video summary with better coverage of the entire video content while less redundancy among key frames comparing to the state-of-the-art approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20summarization" title="video summarization">video summarization</a>, <a href="https://publications.waset.org/abstracts/search?q=key%20frame%20extraction" title=" key frame extraction"> key frame extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20measure" title=" dependency measure"> dependency measure</a>, <a href="https://publications.waset.org/abstracts/search?q=quadratic%20mutual%20information" title=" quadratic mutual information"> quadratic mutual information</a> </p> <a href="https://publications.waset.org/abstracts/75218/key-frame-based-video-summarization-via-dependency-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8304</span> Low Power Glitch Free Dual Output Coarse Digitally Controlled Delay Lines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Shaji%20Mon">K. Shaji Mon</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20R.%20John%20Sreenidhi"> P. R. John Sreenidhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In deep-submicrometer CMOS processes, time-domain resolution of a digital signal is becoming higher than voltage resolution of analog signals. This claim is nowadays pushing toward a new circuit design paradigm in which the traditional analog signal processing is expected to be progressively substituted by the processing of times in the digital domain. Within this novel paradigm, digitally controlled delay lines (DCDL) should play the role of digital-to-analog converters in traditional, analog-intensive, circuits. Digital delay locked loops are highly prevalent in integrated systems.The proposed paper addresses the glitches present in delay circuits along with area,power dissipation and signal integrity.The digitally controlled delay lines(DCDL) under study have been designed in a 90 nm CMOS technology 6 layer metal Copper Strained SiGe Low K Dielectric. Simulation and synthesis results show that the novel circuits exhibit no glitches for dual output coarse DCDL with less power dissipation and consumes less area compared to the glitch free NAND based DCDL. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=glitch%20free" title="glitch free">glitch free</a>, <a href="https://publications.waset.org/abstracts/search?q=NAND-based%20DCDL" title=" NAND-based DCDL"> NAND-based DCDL</a>, <a href="https://publications.waset.org/abstracts/search?q=CMOS" title=" CMOS"> CMOS</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-submicrometer" title=" deep-submicrometer"> deep-submicrometer</a> </p> <a href="https://publications.waset.org/abstracts/2876/low-power-glitch-free-dual-output-coarse-digitally-controlled-delay-lines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2876.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">245</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=277">277</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=278">278</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul 
class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { 
jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
