Search results for: monocular camera

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="monocular camera"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 603</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: monocular camera</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">603</span> A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Feixiang%20Zhao">Feixiang Zhao</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuangcheng%20Jia"> Shuangcheng Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Qian%20Li"> Qian Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> High-precision measurement of the target鈥檚 position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely the Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts: DAN and NMPE; specifically, DAN is a picture sequence algorithm, NMPE is a relatively positive optimization algorithm, which greatly improves the measurement accuracy of the target鈥檚 position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that with the laser point cloud as the ground truth, the size and position errors of the traffic sign measured by this method are 卤 5% and 0.48 卤 0.3m, respectively. In addition, we also compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, which improves the measurement accuracy of size and position by 50% and 15.8%, respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20camera" title="monocular camera">monocular camera</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS" title=" GPS"> GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=positioning" title=" positioning"> positioning</a>, <a href="https://publications.waset.org/abstracts/search?q=measurement" title=" measurement"> measurement</a> </p> <a href="https://publications.waset.org/abstracts/147790/a-monocular-measurement-for-3d-objects-based-on-distance-area-number-and-new-minimize-projection-error-optimization-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147790.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">602</span> Subpixel Corner Detection for Monocular Camera Linear Model Research</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guorong%20Sui">Guorong Sui</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingwei%20Jia"> Xingwei Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20Tong"> Fei Tong</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiumin%20Gao"> Xiumin Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Camera calibration is a fundamental issue of high precision noncontact measurement. And it is necessary to analyze and study the reliability and application range of its linear model which is often used in the camera calibration. According to the imaging features of monocular cameras, a camera model which is based on the image pixel coordinates and three dimensional space coordinates is built. Using our own customized template, the image pixel coordinate is obtained by the subpixel corner detection method. Without considering the aberration of the optical system, the feature extraction and linearity analysis of the line segment in the template are performed. Moreover, the experiment is repeated 11 times by constantly varying the measuring distance. At last, the linearity of the camera is achieved by fitting 11 groups of data. The camera model measurement results show that the relative error does not exceed 1%, and the repeated measurement error is not more than 0.1 mm magnitude. Meanwhile, it is found that the model has some measurement differences in the different region and object distance. The experiment results show this linear model is simple and practical, and have good linearity within a certain object distance. These experiment results provide a powerful basis for establishment of the linear model of camera. These works will have potential value to the actual engineering measurement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20linear%20model" title="camera linear model">camera linear model</a>, <a href="https://publications.waset.org/abstracts/search?q=geometric%20imaging%20relationship" title=" geometric imaging relationship"> geometric imaging relationship</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20pixel%20coordinates" title=" image pixel coordinates"> image pixel coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20dimensional%20space%20coordinates" title=" three dimensional space coordinates"> three dimensional space coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=sub-pixel%20corner%20detection" title=" sub-pixel corner detection"> sub-pixel corner detection</a> </p> <a href="https://publications.waset.org/abstracts/77747/subpixel-corner-detection-for-monocular-camera-linear-model-research" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77747.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">601</span> Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=McClain%20Thiel">McClain Thiel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and localization has historically required two or more sensors due to the loss of information from 3D to 2D space, however, most surveillance systems currently in use in the real world only have one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or more recent relevance, contract tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object detection convolutional-nets, facial landmark detection, and projective geometry. This approach involves classifying the target into a demographic category and then making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20distancing" title="monocular distancing">monocular distancing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20localization" title=" 3D localization "> 3D localization </a> </p> <a href="https://publications.waset.org/abstracts/129037/monocular-3d-person-tracking-aia-demographic-classification-and-projective-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">600</span> Monocular Depth Estimation Benchmarking with Thermal Dataset</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Akyar">Ali Akyar</a>, <a href="https://publications.waset.org/abstracts/search?q=Osman%20Serdar%20Gedik"> Osman Serdar Gedik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera. It predicts how far each pixel in the 2D image is from the capturing point. There are some important Monocular Depth Estimation (MDE) studies that are based on Vision Transformers (ViT). We benchmark three major studies. The first work aims to build a simple and powerful foundation model that deals with any images under any condition. The second work proposes a method by mixing multiple datasets during training and a robust training objective. The third work combines generalization performance and state-of-the-art results on specific datasets. Although there are studies with thermal images too, we wanted to benchmark these three non-thermal, state-of-the-art studies with a hybrid image dataset which is taken by Multi-Spectral Dynamic Imaging (MSX) technology. MSX technology produces detailed thermal images by bringing together the thermal and visual spectrums. Using this technology, our dataset images are not blur and poorly detailed as the normal thermal images. On the other hand, they are not taken at the perfect light conditions as RGB images. We compared three methods under test with our thermal dataset which was not done before. Additionally, we propose an image enhancement deep learning model for thermal data. This model helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after using our proposed model, the performance of these three methods under test increased significantly for thermal image depth prediction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20depth%20estimation" title="monocular depth estimation">monocular depth estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20dataset" title=" thermal dataset"> thermal dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=benchmarking" title=" benchmarking"> benchmarking</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20transformers" title=" vision transformers"> vision transformers</a> </p> <a href="https://publications.waset.org/abstracts/186398/monocular-depth-estimation-benchmarking-with-thermal-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">599</span> Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heru%20Syah%20Putra">Heru Syah Putra</a>, <a href="https://publications.waset.org/abstracts/search?q=Aji%20Tri%20Pamungkas%20Nurcahyo"> Aji Tri Pamungkas Nurcahyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuang-Jan%20Chang"> Chuang-Jan Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> MOIL-SDK method refers to the spatial angle that forms a view with a different perspective from the Fisheye image. Visual Odometry forms a trusted application for extending projects by tracking using image sequences. A real-time, precise, and persistent approach that is able to contribute to the work when taking datasets and generate ground truth as a reference for the estimates of each image using the FAST Algorithm method in finding Keypoints that are evaluated during the tracking process with the 5-point Algorithm with RANSAC, as well as produce accurate estimates the camera trajectory for each rotational, translational movement on the X, Y, and Z axes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MOIL-SDK" title="MOIL-SDK">MOIL-SDK</a>, <a href="https://publications.waset.org/abstracts/search?q=intel%20realsense%20T265" title=" intel realsense T265"> intel realsense T265</a>, <a href="https://publications.waset.org/abstracts/search?q=Fisheye%20image" title=" Fisheye image"> Fisheye image</a>, <a href="https://publications.waset.org/abstracts/search?q=monocular%20visual%20odometry" title=" monocular visual odometry"> monocular visual odometry</a> </p> <a href="https://publications.waset.org/abstracts/147340/monocular-visual-odometry-for-three-different-view-angles-by-intel-realsense-t265-with-the-measurement-of-remote" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147340.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">598</span> Video Sharing System Based On Wi-fi Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qidi%20Lin">Qidi Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinbin%20Huang"> Jinbin Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Weile%20Liang"> Weile Liang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a video sharing platform based on WiFi, which consists of camera, mobile phone and PC server. This platform can receive wireless signal from the camera and show the live video on the mobile phone captured by camera. In addition that, it is able to send commands to camera and control the camera鈥檚 holder to rotate. The platform can be applied to interactive teaching and dangerous area鈥檚 monitoring and so on. Testing results show that the platform can share the live video of mobile phone. Furthermore, if the system鈥檚 PC sever and the camera and many mobile phones are connected together, it can transfer photos concurrently. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wifi%20Camera" title="Wifi Camera">Wifi Camera</a>, <a href="https://publications.waset.org/abstracts/search?q=socket%20mobile" title=" socket mobile"> socket mobile</a>, <a href="https://publications.waset.org/abstracts/search?q=platform%20video%20monitoring" title=" platform video monitoring"> platform video monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20control" title=" remote control"> remote control</a> </p> <a href="https://publications.waset.org/abstracts/31912/video-sharing-system-based-on-wi-fi-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31912.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">597</span> Design of Speed Bump Recognition System Integrated with Adjustable Shock Absorber Control</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ming-Yen%20Chang">Ming-Yen Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sheng-Hung%20Ke"> Sheng-Hung Ke</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research focuses on the development of a speed bump identification system for real-time control of adjustable shock absorbers in vehicular suspension systems. The study initially involved the collection of images of various speed bumps, and rubber speed bump profiles found on roadways. These images were utilized for training and recognition purposes through the deep learning object detection algorithm YOLOv5. Subsequently, the trained speed bump identification program was integrated with an in-vehicle camera system for live image capture during driving. These images were instantly transmitted to a computer for processing. Using the principles of monocular vision ranging, the distance between the vehicle and an approaching speed bump was determined. The appropriate control distance was established through both practical vehicle measurements and theoretical calculations. Collaboratively, with the electronically adjustable shock absorbers equipped in the vehicle, a shock absorber control system was devised to dynamically adapt the damping force just prior to encountering a speed bump. This system effectively mitigates passenger discomfort and enhances ride quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adjustable%20shock%20absorbers" title="adjustable shock absorbers">adjustable shock absorbers</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=monocular%20vision%20ranging" title=" monocular vision ranging"> monocular vision ranging</a>, <a href="https://publications.waset.org/abstracts/search?q=ride" title=" ride"> ride</a> </p> <a href="https://publications.waset.org/abstracts/175109/design-of-speed-bump-recognition-system-integrated-with-adjustable-shock-absorber-control" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/175109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">596</span> Optical Flow Localisation and Appearance Mapping (OFLAAM) for Long-Term Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Pastor">Daniel Pastor</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyo-Sang%20Shin"> Hyo-Sang Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a novel method to use optical flow navigation for long-term navigation. Unlike standard SLAM approaches for augmented reality, OFLAAM is designed for Micro Air Vehicles (MAV). It uses an optical flow camera pointing downwards, an IMU and a monocular camera pointing frontwards. That configuration avoids the expensive mapping and tracking of the 3D features. It only maps these features in a vocabulary list by a localization module to tackle the loss of the navigation estimation. That module, based on the well-established algorithm DBoW2, will be also used to close the loop and allow long-term navigation in confined areas. That combination of high-speed optical flow navigation with a low rate localization algorithm allows fully autonomous navigation for MAV, at the same time it reduces the overall computational load. This framework is implemented in ROS (Robot Operating System) and tested attached to a laptop. A representative scenarios is used to analyse the performance of the system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision" title="vision">vision</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=SLAM" title=" SLAM"> SLAM</a> </p> <a href="https://publications.waset.org/abstracts/20509/optical-flow-localisation-and-appearance-mapping-oflaam-for-long-term-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">595</span> Fundamental Study on Reconstruction of 3D Image Using Camera and Ultrasound</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Takaaki%20Miyabe">Takaaki Miyabe</a>, <a href="https://publications.waset.org/abstracts/search?q=Hideharu%20Takahashi"> Hideharu Takahashi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hiroshige%20Kikura"> Hiroshige Kikura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Government of Japan and Tokyo Electric Power Company Holdings, Incorporated (TEPCO) are struggling with the decommissioning of Fukushima Daiichi Nuclear Power Plants, especially fuel debris retrieval. In fuel debris retrieval, amount of fuel debris, location, characteristics, and distribution information are important. Recently, a survey was conducted using a robot with a small camera. Progress report in remote robot and camera research has speculated that fuel debris is present both at the bottom of the Pressure Containment Vessel (PCV) and inside the Reactor Pressure Vessel (RPV). The investigation found a 'tie plate' at the bottom of the containment, this is handles on the fuel rod. As a result, it is assumed that a hole large enough to allow the tie plate to fall is opened at the bottom of the reactor pressure vessel. Therefore, exploring the existence of holes that lead to inside the RCV is also an issue. Investigations of the lower part of the RPV are currently underway, but no investigations have been made inside or above the PCV. Therefore, a survey must be conducted for future fuel debris retrieval. The environment inside of the RPV cannot be imagined due to the effect of the melted fuel. To do this, we need a way to accurately check the internal situation. What we propose here is the adaptation of a technology called 'Structure from Motion' that reconstructs a 3D image from multiple photos taken by a single camera. The plan is to mount a monocular camera on the tip of long-arm robot, reach it to the upper part of the PCV, and to taking video. Now, we are making long-arm robot that has long-arm and used at high level radiation environment. However, the environment above the pressure vessel is not known exactly. Also, fog may be generated by the cooling water of fuel debris, and the radiation level in the environment may be high. Since camera alone cannot provide sufficient sensing in these environments, we will further propose using ultrasonic measurement technology in addition to cameras. 
An ultrasonic sensor is resistant to environmental changes such as fog and to environments with a high radiation dose, so these systems can be used for a long time. The purpose is to develop a system adapted to the inside of the containment vessel by combining a camera and ultrasound. Therefore, in this research, we performed a basic experiment on 3D image reconstruction using a camera and ultrasound; in this report, we identify the favorable and unfavorable conditions for each sensing modality and propose the reconstruction and detection method. The results revealed the strengths and weaknesses of each approach.
Keywords: camera, image processing, reconstruction, ultrasound
Procedia: https://publications.waset.org/abstracts/119953/fundamental-study-on-reconstruction-of-3d-image-using-camera-and-ultrasound | PDF: https://publications.waset.org/abstracts/119953.pdf | Downloads: 104

594. A Study of Effective Stereo Matching Method for Long-Wave Infrared Camera Module
Authors: Hyun-Koo Kim, Yonghun Kim, Yong-Hoon Kim, Ju Hee Lee, Myungho Song
Abstract: In this paper, we describe an efficient stereo matching method and a pedestrian detection method using a stereo LWIR camera. We compare three stereo matching algorithms: block matching, ELAS, and SGM. For pedestrian detection with the stereo LWIR camera, we use SGM stereo matching, free-space detection based on u/v-disparity, and HOG-feature-based pedestrian detection. According to the test results, SGM performs better than block matching and the ELAS algorithm, and the combination of SGM, free-space detection, and pedestrian detection using HOG features with SVM classification can detect pedestrians at a distance of 30 m with a distance error of about 30 cm.
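As a concrete reference point for the SGM step, OpenCV's semi-global block matcher computes a disparity map from a rectified pair; left and right are assumed rectified 8-bit LWIR frames, and the parameter values are illustrative defaults rather than the paper's configuration:

```python
# Hedged sketch: semi-global matching disparity on a rectified stereo pair
# (OpenCV's SGBM variant of SGM; parameters are illustrative).
import cv2

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,             # smoothness penalties for small and large
    P2=32 * 5 * 5,            # disparity changes between neighbours
    uniquenessRatio=10,
)
# left/right: rectified 8-bit images; SGBM returns fixed-point
# disparities scaled by 16.
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```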
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20driver%20assistance%20system" title="advanced driver assistance system">advanced driver assistance system</a>, <a href="https://publications.waset.org/abstracts/search?q=pedestrian%20detection" title=" pedestrian detection"> pedestrian detection</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20matching%20method" title=" stereo matching method"> stereo matching method</a>, <a href="https://publications.waset.org/abstracts/search?q=stereo%20long-wave%20IR%20camera" title=" stereo long-wave IR camera"> stereo long-wave IR camera</a> </p> <a href="https://publications.waset.org/abstracts/58413/a-study-of-effective-stereo-matching-method-for-long-wave-infrared-camera-module" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58413.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">593</span> Image Features Comparison-Based Position Estimation Method Using a Camera Sensor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinseon%20Song">Jinseon Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongwan%20Park"> Yongwan Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, propose method that can user&rsquo;s position that based on database is built from single camera. Previous positioning calculate distance by arrival-time of signal like GPS (Global Positioning System), RF(Radio Frequency). However, these previous method have weakness because these have large error range according to signal interference. Method for solution estimate position by camera sensor. But, signal camera is difficult to obtain relative position data and stereo camera is difficult to provide real-time position data because of a lot of image data, too. First of all, in this research we build image database at space that able to provide positioning service with single camera. Next, we judge similarity through image matching of database image and transmission image from user. Finally, we decide position of user through position of most similar database image. For verification of propose method, we experiment at real-environment like indoor and outdoor. Propose method is wide positioning range and this method can verify not only position of user but also direction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=positioning" title="positioning">positioning</a>, <a href="https://publications.waset.org/abstracts/search?q=distance" title=" distance"> distance</a>, <a href="https://publications.waset.org/abstracts/search?q=camera" title=" camera"> camera</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%28Speed-Up%20Robust%20Features%29" title=" SURF(Speed-Up Robust Features)"> SURF(Speed-Up Robust Features)</a>, <a href="https://publications.waset.org/abstracts/search?q=database" title=" database"> database</a>, <a href="https://publications.waset.org/abstracts/search?q=estimation" title=" estimation"> estimation</a> </p> <a href="https://publications.waset.org/abstracts/11844/image-features-comparison-based-position-estimation-method-using-a-camera-sensor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11844.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">592</span> Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyed-Yaser%20Nabavi-Chashmi">Seyed-Yaser Nabavi-Chashmi</a>, <a href="https://publications.waset.org/abstracts/search?q=Davood%20Asadi"> Davood Asadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Karim%20Ahmadi"> Karim Ahmadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Eren%20Demir"> Eren Demir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The landing phase of a UAV is very critical as there are many uncertainties in this phase, which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or with a limited operational range, for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether we can measure the relative distance and velocity of UAV and ground in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Konda feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in estimation of the relative distance and velocity are compared. 
Keywords: altitude estimation, drone, image processing, trajectory planning
Procedia: https://publications.waset.org/abstracts/147377/image-based-uav-vertical-distance-and-velocity-estimation-algorithm-during-the-vertical-landing-phase-using-low-resolution-images | PDF: https://publications.waset.org/abstracts/147377.pdf | Downloads: 113

591. An Efficient Fundamental Matrix Estimation for Moving Object Detection
Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung
Abstract: In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. Corner points are obtained with the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. The fundamental matrix is then calculated through epipolar geometry analysis using RANSAC. The method improves the performance of moving object detection by using two threshold values to determine inliers and outliers, and the simulations compare performance as the two threshold values are varied.
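The Harris, pyramidal Lucas-Kanade, and RANSAC chain maps directly onto standard OpenCV calls; a sketch follows under the assumption that frame1 and frame2 are consecutive grayscale frames (the paper's dual-threshold inlier test is not reproduced here):

```python
# Hedged sketch: fundamental matrix from tracked corners with RANSAC;
# points far from their epipolar lines are moving-object candidates.
import cv2

p1 = cv2.goodFeaturesToTrack(frame1, maxCorners=500, qualityLevel=0.01,
                             minDistance=7, useHarrisDetector=True)
p2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, p1, None)
g1, g2 = p1[status.ravel() == 1], p2[status.ravel() == 1]

F, inlier_mask = cv2.findFundamentalMat(g1, g2, cv2.FM_RANSAC, 1.0, 0.99)
# RANSAC outliers violate the static-scene epipolar constraint
# x2^T F x1 = 0, so they are candidate moving-object points.
moving = g2[inlier_mask.ravel() == 0]
```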
Keywords: corner detection, optical flow, epipolar geometry, RANSAC
Procedia: https://publications.waset.org/abstracts/79103/an-efficient-fundamental-matrix-estimation-for-moving-object-detection | PDF: https://publications.waset.org/abstracts/79103.pdf | Downloads: 409

590. X-Corner Detection for Camera Calibration Using Saddle Points
Authors: Abdulrahman S. Alturki, John S. Loomis
Abstract: This paper discusses a corner detection algorithm for camera calibration, a necessary step in many computer vision and image processing applications; robust detection of the corners in an image of a checkerboard is required to determine the intrinsic and extrinsic parameters. An algorithm for fully automatic and robust X-corner detection is presented in which checkerboard corner points are found automatically in each image without user interaction or any prior information about the number of rows or columns. The approach represents each X-corner with a quadratic fitting function and, using the fact that X-corners are saddle points, identifies each corner location from the coefficients of the fit. The automation of this process greatly simplifies calibration, and the method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of the method on real images acquired at different camera locations and orientations.
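The saddle-point test itself is compact: fit z = ax² + bxy + cy² + dx + ey + f to a patch of intensities and check that the quadratic part is indefinite (4ac − b² < 0). A numpy sketch follows, with candidate selection and non-maximum suppression omitted:

```python
# Hedged sketch: classify an image patch as an X-corner (saddle point)
# by least-squares fitting a quadratic surface to its intensities.
import numpy as np

def is_saddle(patch):
    """patch: small 2D array of grayscale values centred on a candidate."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel(), y.ravel(), patch.ravel().astype(float)
    # Design matrix for z = a x^2 + b xy + c y^2 + d x + e y + f.
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    # Saddle iff the Hessian [[2a, b], [b, 2c]] is indefinite: 4ac - b^2 < 0.
    return 4 * a * c - b * b < 0
```

Solving the fit's gradient for zero then gives the subpixel corner location, which is how a coefficient-based detector refines each corner.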
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title="camera calibration">camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=corner%20detector" title=" corner detector"> corner detector</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detector" title=" edge detector"> edge detector</a>, <a href="https://publications.waset.org/abstracts/search?q=saddle%20points" title=" saddle points"> saddle points</a> </p> <a href="https://publications.waset.org/abstracts/40538/x-corner-detection-for-camera-calibration-using-saddle-points" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">589</span> Frame Camera and Event Camera in Stereo Pair for High-Resolution Sensing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khen%20Cohen">Khen Cohen</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Yankelevich"> Daniel Yankelevich</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Mendlovic"> David Mendlovic</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Raviv"> Dan Raviv</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a 3D stereo system for high-resolution sensing in both the spatial and the temporal domains by combining a frame-based camera and an event-based camera. We establish a method to merge both devices into one unite system and introduce a calibration process, followed by a correspondence technique and interpolation algorithm for 3D reconstruction. We further provide quantitative analysis about our system in terms of depth resolution and additional parameter analysis. We show experimentally how our system performs temporal super-resolution up to effectively 1ms and can detect fast-moving objects and human micro-movements that can be used for micro-expression analysis. We also demonstrate how our method can extract colored events for an event-based camera without any degradation in the spatial resolution, compared to a colored filter array. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=DVS-CIS%20stereo%20vision" title="DVS-CIS stereo vision">DVS-CIS stereo vision</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-movements" title=" micro-movements"> micro-movements</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20super-resolution" title=" temporal super-resolution"> temporal super-resolution</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title=" 3D reconstruction"> 3D reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/143524/frame-camera-and-event-camera-in-stereo-pair-for-high-resolution-sensing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143524.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">588</span> H.263 Based Video Transceiver for Wireless Camera System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Won-Ho%20Kim">Won-Ho Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a design of H.263 based wireless video transceiver is presented for wireless camera system. It uses standard WIFI transceiver and the covering area is up to 100m. Furthermore the standard H.263 video encoding technique is used for video compression since wireless video transmitter is unable to transmit high capacity raw data in real time and the implemented system is capable of streaming at speed of less than 1Mbps using NTSC 720x480 video. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wireless%20video%20transceiver" title="wireless video transceiver">wireless video transceiver</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance%20camera" title=" video surveillance camera"> video surveillance camera</a>, <a href="https://publications.waset.org/abstracts/search?q=H.263%20video%20encoding%20digital%20signal%20processing" title=" H.263 video encoding digital signal processing"> H.263 video encoding digital signal processing</a> </p> <a href="https://publications.waset.org/abstracts/12951/h263-based-video-transceiver-for-wireless-camera-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12951.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">587</span> The Contribution of Lower Visual Channels and Evolutionary Origin of the Tunnel Effect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shai%20Gabay">Shai Gabay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The tunnel effect describes the phenomenon where a moving object seems to persist even when temporarily hidden from view. 
Numerous studies indicate that humans, infants, and nonhuman primates possess object persistence, relying on spatiotemporal cues to track objects that are dynamically occluded. While this ability is associated with neural activity in the cerebral neocortex of humans and mammals, the role of subcortical mechanisms remains ambiguous. In our current investigation, we explore the functional contribution of the monocular aspects of the visual system, which are predominantly subcortical, to the representation of occluded objects. This is achieved by manipulating whether an object reappears to the same or a different eye from the one that saw it disappear. Additionally, we employ archerfish, renowned for their precision in dislodging insect prey with water jets, as a phylogenetic model to probe the evolutionary origins of the tunnel effect. Our findings reveal the active involvement of subcortical structures in the mental representation of occluded objects, a process evident even in species that possess no cortical tissue.
Keywords: archerfish, tunnel effect, mental representations, monocular channels, subcortical structures
Procedia: https://publications.waset.org/abstracts/185847/the-contribution-of-lower-visual-channels-and-evolutionary-origin-of-the-tunnel-effect | PDF: https://publications.waset.org/abstracts/185847.pdf | Downloads: 45

586. A Wide View Scheme for Automobile's Black Box
Authors: Jaemyoung Lee
Abstract: We propose a wide-view camera scheme for an automobile's black box. The proposed scheme uses commercially available camera lenses with view angles of about 120°. We extend the view angle to approximately 200° by using two such cameras at the front side, instead of the three lenses used in conventional black boxes.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera" title="camera">camera</a>, <a href="https://publications.waset.org/abstracts/search?q=black%20box" title=" black box"> black box</a>, <a href="https://publications.waset.org/abstracts/search?q=view%20angle" title=" view angle"> view angle</a>, <a href="https://publications.waset.org/abstracts/search?q=automobile" title=" automobile"> automobile</a> </p> <a href="https://publications.waset.org/abstracts/2582/a-wide-view-scheme-for-automobiles-black-box" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2582.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">585</span> Modal Analysis of a Cantilever Beam Using an Inexpensive Smartphone Camera: Motion Magnification Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hasan%20Hassoun">Hasan Hassoun</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaafar%20Hallal"> Jaafar Hallal</a>, <a href="https://publications.waset.org/abstracts/search?q=Denis%20Duhamel"> Denis Duhamel</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Hammoud"> Mohammad Hammoud</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Hage%20Diab"> Ali Hage Diab</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to prove the accuracy of an inexpensive smartphone camera as a non-contact vibration sensor to recover the vibration modes of a vibrating structure such as a cantilever beam. A video of a vibrating beam is filmed using a smartphone camera and then processed by the motion magnification technique. Based on this method, the first two natural frequencies and their associated mode shapes are estimated experimentally and compared to the analytical ones. Results show a relative error of less than 4% between the experimental and analytical approaches for the first two natural frequencies of the beam. Also, for the first two-mode shapes, a Modal Assurance Criterion (MAC) value of above 0.9 between the two approaches is obtained. This slight error between the different techniques ensures the viability of a cheap smartphone camera as a non-contact vibration sensor, particularly for structures vibrating at relatively low natural frequencies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=modal%20analysis" title="modal analysis">modal analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20magnification" title=" motion magnification"> motion magnification</a>, <a href="https://publications.waset.org/abstracts/search?q=smartphone%20camera" title=" smartphone camera"> smartphone camera</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20vibration" title=" structural vibration"> structural vibration</a>, <a href="https://publications.waset.org/abstracts/search?q=vibration%20modes" title=" vibration modes"> vibration modes</a> </p> <a href="https://publications.waset.org/abstracts/127525/modal-analysis-of-a-cantilever-beam-using-an-inexpensive-smartphone-camera-motion-magnification-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127525.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">584</span> GIS-Based Automatic Flight Planning of Camera-Equipped UAVs for Fire Emergency Response</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Sulaiman">Mohammed Sulaiman</a>, <a href="https://publications.waset.org/abstracts/search?q=Hexu%20Liu"> Hexu Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Binalhaj"> Mohamed Binalhaj</a>, <a href="https://publications.waset.org/abstracts/search?q=William%20W.%20Liou"> William W. Liou</a>, <a href="https://publications.waset.org/abstracts/search?q=Osama%20Abudayyeh"> Osama Abudayyeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Emerging technologies such as camera-equipped unmanned aerial vehicles (UAVs) are increasingly being applied in building fire rescue to provide real-time visualization and 3D reconstruction of the entire fireground. However, flight planning of camera-equipped UAVs is usually a manual process, which is not sufficient to fulfill the needs of emergency management. This research proposes a Geographic Information System (GIS)-based approach to automatic flight planning of camera-equipped UAVs for building fire emergency response. In this research, Haversine formula and lawn mowing patterns are employed to automate flight planning based on geometrical and spatial information from GIS. The resulting flight mission satisfies the requirements of 3D reconstruction purposes of the fireground, in consideration of flight execution safety and visibility of camera frames. The proposed approach is implemented within a GIS environment through an application programming interface. A case study is used to demonstrate the effectiveness of the proposed approach. The result shows that flight mission can be generated in a timely manner for application to fire emergency response. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GIS" title="GIS">GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=camera-equipped%20UAVs" title=" camera-equipped UAVs"> camera-equipped UAVs</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20flight%20planning" title=" automatic flight planning"> automatic flight planning</a>, <a href="https://publications.waset.org/abstracts/search?q=fire%20emergency%20response" title=" fire emergency response"> fire emergency response</a> </p> <a href="https://publications.waset.org/abstracts/125166/gis-based-automatic-flight-planning-of-camera-equipped-uavs-for-fire-emergency-response" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125166.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">125</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">583</span> Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Maria%20Pavlova">Maria Pavlova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In our days, it is possible to put the camera on different vehicles like quadcopter, train, airplane and etc. The camera also can be the input sensor in many different systems. That means the object recognition like non separate part of monitoring control can be key part of the most intelligent systems. The aim of this paper is to focus of the object recognition process during vehicles movement. During the vehicle鈥檚 movement the camera takes pictures from the environment without storage in Data Base. In case the camera detects a special object (for example human or animal), the system saves the picture and sends it to the work station in real time. This functionality will be very useful in emergency or security situations where is necessary to find a specific object. In another application, the camera can be mounted on crossroad where do not have many people and if one or more persons come on the road, the traffic lights became the green and they can cross the road. In this papers is presented the system has solved the aforementioned problems. It is presented architecture of the object recognition system includes the camera, Raspberry platform, GPS system, neural network, software and Data Base. The camera in the system takes the pictures. The object recognition is done in real time using the OpenCV library and Raspberry microcontroller. An additional feature of this library is the ability to display the GPS coordinates of the captured objects position. The results from this processes will be sent to remote station. So, in this case, we can know the location of the specific object. By neural network, we can learn the module to solve the problems using incoming data and to be part in bigger intelligent system. The present paper focuses on the design and integration of the image recognition like a part of smart systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera" title="camera">camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a>, <a href="https://publications.waset.org/abstracts/search?q=Raspberry" title=" Raspberry"> Raspberry</a> </p> <a href="https://publications.waset.org/abstracts/81695/object-recognition-system-operating-from-different-type-vehicles-using-raspberry-and-opencv" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81695.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">218</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">582</span> Person Re-Identification using Siamese Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sello%20Mokwena">Sello Mokwena</a>, <a href="https://publications.waset.org/abstracts/search?q=Monyepao%20Thabang"> Monyepao Thabang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20network" title="camera network">camera network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network%20topology" title=" convolutional neural network topology"> convolutional neural network topology</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20tracking" title=" person tracking"> person tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=siamese" title=" siamese"> siamese</a> </p> <a href="https://publications.waset.org/abstracts/171989/person-re-identification-using-siamese-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">581</span> Hand Gesture Recognition Interface Based on IR Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yang-Keun%20Ahn">Yang-Keun Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Soon%20Choi"> Kwang-Soon Choi</a>, <a href="https://publications.waset.org/abstracts/search?q=Young-Choong%20Park"> Young-Choong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kwang-Mo%20Jung"> Kwang-Mo Jung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vision based user interfaces to control TVs and PCs have the advantage of being able to perform natural control without being limited to a specific device. Accordingly, various studies on hand gesture recognition using RGB cameras or depth cameras have been conducted. However, such cameras have the disadvantage of lacking in accuracy or the construction cost being large. The proposed method uses a low cost IR camera to accurately differentiate between the hand and the background. Also, complicated learning and template matching methodologies are not used, and the correlation between the fingertips extracted through curvatures is utilized to recognize Click and Move gestures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recognition" title="recognition">recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gestures" title=" hand gestures"> hand gestures</a>, <a href="https://publications.waset.org/abstracts/search?q=infrared%20camera" title=" infrared camera"> infrared camera</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20cameras" title=" RGB cameras"> RGB cameras</a> </p> <a href="https://publications.waset.org/abstracts/13373/hand-gesture-recognition-interface-based-on-ir-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13373.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">580</span> An Investigation of Direct and Indirect Geo-Referencing Techniques on the Accuracy of Points in Photogrammetry</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=F.%20Yildiz">F. Yildiz</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Y.%20Oturanc"> S. Y. Oturanc</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advances technology in the field of photogrammetry replaces analog cameras with reflection on aircraft GPS/IMU system with a digital aerial camera. In this system, when determining the position of the camera with the GPS, camera rotations are also determined by the IMU systems. All around the world, digital aerial cameras have been used for the photogrammetry applications in the last ten years. In this way, in terms of the work done in photogrammetry it is possible to use time effectively, costs to be reduced to a minimum level, the opportunity to make fast and accurate. Geo-referencing techniques that are the cornerstone of the GPS / INS systems, photogrammetric triangulation of images required for balancing (interior and exterior orientation) brings flexibility to the process. Also geo-referencing process; needed in the application of photogrammetry targets to help to reduce the number of ground control points. In this study, the use of direct and indirect geo-referencing techniques on the accuracy of the points was investigated in the production of photogrammetric mapping. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=photogrammetry" title="photogrammetry">photogrammetry</a>, <a href="https://publications.waset.org/abstracts/search?q=GPS%2FIMU%20systems" title=" GPS/IMU systems"> GPS/IMU systems</a>, <a href="https://publications.waset.org/abstracts/search?q=geo-referecing" title=" geo-referecing"> geo-referecing</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20aerial%20camera" title=" digital aerial camera"> digital aerial camera</a> </p> <a href="https://publications.waset.org/abstracts/13852/an-investigation-of-direct-and-indirect-geo-referencing-techniques-on-the-accuracy-of-points-in-photogrammetry" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">579</span> Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Atef%20Alaaeddine%20Sarraj">Atef Alaaeddine Sarraj</a>, <a href="https://publications.waset.org/abstracts/search?q=Brendan%20Jackman"> Brendan Jackman</a>, <a href="https://publications.waset.org/abstracts/search?q=Frank%20Walsh"> Frank Walsh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tomorrow鈥檚 car will be more automated and increasingly connected. Innovative and intuitive interfaces are essential to accompany this functional enrichment. For that, today the automotive companies are competing to offer an advanced driver assistance system (ADAS) which will be able to provide enhanced navigation, collision avoidance, intersection support and lane keeping. These vision-based functions require an accurately calibrated camera. To achieve such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters and corrects the wide-angle distortion. Our solution enables continuous and real-time detection of objects, pedestrians, road markings and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real-time. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=advanced%20driver%20assistance%20system%20%28ADAS%29" title="advanced driver assistance system (ADAS)">advanced driver assistance system (ADAS)</a>, <a href="https://publications.waset.org/abstracts/search?q=fish-eye" title=" fish-eye"> fish-eye</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=self-calibration" title=" self-calibration"> self-calibration</a> </p> <a href="https://publications.waset.org/abstracts/70853/self-calibration-of-fish-eye-camera-for-advanced-driver-assistance-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70853.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">252</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">578</span> A Simple Autonomous Hovering and Operating Control of Multicopter Using Only Web Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kazuya%20Sato">Kazuya Sato</a>, <a href="https://publications.waset.org/abstracts/search?q=Toru%20Kasahara"> Toru Kasahara</a>, <a href="https://publications.waset.org/abstracts/search?q=Junji%20Kuroda"> Junji Kuroda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an autonomous hovering control method of multicopter using only Web camera is proposed. Recently, various control method of an autonomous flight for multicopter are proposed. But, in the previously proposed methods, a motion capture system (i.e., OptiTrack) and laser range finder are often used to measure the position and posture of multicopter. To achieve an autonomous flight control of multicopter with simple equipment, we propose an autonomous flight control method using AR marker and Web camera. AR marker can measure the position of multicopter with Cartesian coordinate in three dimensional, then its position connects with aileron, elevator, and accelerator throttle operation. A simple PID control method is applied to the each operation and adjust the controller gains. Experimental result are given to show the effectiveness of our proposed method. Moreover, another simple operation method for autonomous flight control multicopter is also proposed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20hovering%20control" title="autonomous hovering control">autonomous hovering control</a>, <a href="https://publications.waset.org/abstracts/search?q=multicopter" title=" multicopter"> multicopter</a>, <a href="https://publications.waset.org/abstracts/search?q=Web%20camera" title=" Web camera"> Web camera</a>, <a href="https://publications.waset.org/abstracts/search?q=operation" title=" operation "> operation </a> </p> <a href="https://publications.waset.org/abstracts/20333/a-simple-autonomous-hovering-and-operating-control-of-multicopter-using-only-web-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20333.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">562</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">577</span> Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janusz%20Bedkowski">Janusz Bedkowski</a>, <a href="https://publications.waset.org/abstracts/search?q=Grzegorz%20Kisala"> Grzegorz Kisala</a>, <a href="https://publications.waset.org/abstracts/search?q=Michal%20Wlasiuk"> Michal Wlasiuk</a>, <a href="https://publications.waset.org/abstracts/search?q=Piotr%20Pokorski"> Piotr Pokorski</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ground-truth data is essential for VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) quantitative evaluation using e.g. ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmark purposes. The issue appears when one would like to validate Visual Odometry and/or SLAM approaches on data captured using the device for which the algorithm is targeted for example mobile phone and disseminate data for other researchers. For this reason, we propose an open source, open hardware groundtruth system that provides an accurate and precise trajectory with a 3D point cloud. It is based on LiDAR Livox Mid-360 with a non-repetitive scanning pattern, on-board Raspberry Pi 4B computer, battery and software for off-line calculations (camera to LiDAR calibration, LiDAR odometry, SLAM, georeferencing). We show how this system can be used for the evaluation of various the state of the art algorithms (Stella SLAM, ORB SLAM3, DSO) in typical indoor monocular VO/SLAM. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SLAM" title="SLAM">SLAM</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth" title=" ground truth"> ground truth</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20odometry" title=" visual odometry"> visual odometry</a>, <a href="https://publications.waset.org/abstracts/search?q=mapping" title=" mapping"> mapping</a> </p> <a href="https://publications.waset.org/abstracts/187389/open-source-open-hardware-ground-truth-for-visual-odometry-and-simultaneous-localization-and-mapping-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">69</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">576</span> An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossain%20A.">Hossain A.</a>, <a href="https://publications.waset.org/abstracts/search?q=Chowdhury%20S.%20I."> Chowdhury S. I.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities or hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether they could use an artificial neural network to determine whether CKD was in normal or abnormal stages based on GFR values (ANN). Method: The 250 kidney patients (Training 188, Testing 62) who underwent an ultrasonography test to diagnose a renal test in our nuclear medical center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of 鈦光伖岬怲c-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been pushed into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss to determine whether CKD is normal or abnormal based on the GFR value in the output layer. Results: The proposed ANN model had a 99.20 % accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10 and 99.20 %, respectively. AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD by using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=glomerular%20filtration%20rate" title=" glomerular filtration rate"> glomerular filtration rate</a>, <a href="https://publications.waset.org/abstracts/search?q=stages%20of%20the%20kidney" title=" stages of the kidney"> stages of the kidney</a>, <a href="https://publications.waset.org/abstracts/search?q=gamma%20camera" title=" gamma camera"> gamma camera</a> </p> <a href="https://publications.waset.org/abstracts/153994/an-automated-procedure-for-estimating-the-glomerular-filtration-rate-and-determining-the-normality-or-abnormality-of-the-kidney-stages-using-an-artificial-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153994.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">103</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">575</span> Smart Side View Mirror Camera for Real Time System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nunziata%20Ivana%20Guarneri">Nunziata Ivana Guarneri</a>, <a href="https://publications.waset.org/abstracts/search?q=Arcangelo%20Bruna"> Arcangelo Bruna</a>, <a href="https://publications.waset.org/abstracts/search?q=Giuseppe%20Spampinato"> Giuseppe Spampinato</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Buemi"> Antonio Buemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the last decade, automotive companies have invested a lot in terms of innovation about many aspects regarding the automatic driver assistance systems. One innovation regards the usage of a smart camera placed on the car&rsquo;s side mirror for monitoring the back and lateral road situation. A common road scenario is the overtaking of the preceding car and, in this case, a brief distraction or a loss of concentration can lead the driver to undertake this action, even if there is an already overtaking vehicle, leading to serious accidents. A valid support for a secure drive can be a smart camera system, which is able to automatically analyze the road scenario and consequentially to warn the driver when another vehicle is overtaking. This paper describes a method for monitoring the side view of a vehicle by using camera optical flow motion vectors. The proposed solution detects the presence of incoming vehicles, assesses their distance from the host car, and warns the driver through different levels of alert according to the estimated distance. Due to the low complexity and computational cost, the proposed system ensures real time performances. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=camera%20calibration" title="camera calibration">camera calibration</a>, <a href="https://publications.waset.org/abstracts/search?q=ego-motion" title=" ego-motion"> ego-motion</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filters" title=" Kalman filters"> Kalman filters</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=real%20time%20systems" title=" real time systems"> real time systems</a> </p> <a href="https://publications.waset.org/abstracts/79998/smart-side-view-mirror-camera-for-real-time-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">228</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">574</span> Multiplayer RC-car Driving System in a Collaborative Augmented Reality Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kikuo%20Asai">Kikuo Asai</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuji%20Sugimoto"> Yuji Sugimoto</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We developed a prototype system for multiplayer RC-car driving in a collaborative Augmented Reality (AR) environment. The tele-existence environment is constructed by superimposing digital data onto images captured by a camera on an RC-car, enabling players to experience an augmented coexistence of the digital content and the real world. Marker-based tracking was used for estimating position and orientation of the camera. The plural RC-cars can be operated in a field where square markers are arranged. The video images captured by the camera are transmitted to a PC for visual tracking. The RC-cars are also tracked by using an infrared camera attached to the ceiling, so that the instability is reduced in the visual tracking. Multimedia data such as texts and graphics are visualized to be overlaid onto the video images in the geometrically correct manner. The prototype system allows a tele-existence sensation to be augmented in a collaborative AR environment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multiplayer" title="multiplayer">multiplayer</a>, <a href="https://publications.waset.org/abstracts/search?q=RC-car" title=" RC-car"> RC-car</a>, <a href="https://publications.waset.org/abstracts/search?q=collaborative%20environment" title=" collaborative environment"> collaborative environment</a>, <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title=" augmented reality"> augmented reality</a> </p> <a href="https://publications.waset.org/abstracts/4359/multiplayer-rc-car-driving-system-in-a-collaborative-augmented-reality-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=20">20</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=21">21</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=monocular%20camera&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account 
<li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
