
Search results for: blind system identification

<h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: blind system identification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20024</span> Modeling of a UAV Longitudinal Dynamics through System Identification Technique</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Asadullah%20I.%20Qazi">Asadullah I. Qazi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mansoor%20Ahsan"> Mansoor Ahsan</a>, <a href="https://publications.waset.org/abstracts/search?q=Zahir%20Ashraf"> Zahir Ashraf</a>, <a href="https://publications.waset.org/abstracts/search?q=Uzair%20Ahmad"> Uzair Ahmad </a> </p> <p class="card-text"><strong>Abstract:</strong></p> System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. A reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft, and investigation of fatigue life and stress distribution. This research is aimed at system identification of a fixed wing UAV by means of a specifically designed flight experiment. The purposely designed flight maneuvers were performed on the UAV, and the aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB System Identification Toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least square error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
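<p class="card-text">The estimation step described in this abstract, fitting transfer function models to recorded input-output flight data by minimizing the squared error, can be illustrated outside the MATLAB toolbox as well. The short Python sketch below fits a discrete-time ARX model of a pitch-rate response to an elevator-style input; the signals, model orders, and coefficients are invented placeholders for illustration, not the authors' flight data or workflow.</p> <pre><code>
import numpy as np

# Assumed example data: elevator input u[k] and pitch-rate output q[k],
# already noise-filtered and bias-removed as described in the abstract.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
q = np.zeros(500)
for k in range(2, 500):
    # toy "true" longitudinal response, used only to generate data
    q[k] = 1.5*q[k-1] - 0.7*q[k-2] + 0.2*u[k-1] + 0.1*u[k-2]
q += 0.01 * rng.standard_normal(500)              # measurement noise

# Least-square-error fit of a second-order ARX model
Phi = np.column_stack([q[1:-1], q[:-2], u[1:-1], u[:-2]])
y = q[2:]
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(theta)   # expected to be close to [1.5, -0.7, 0.2, 0.1]
</code></pre>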
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fixed%20wing%20UAV" title="fixed wing UAV">fixed wing UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=black%20box%20modeling" title=" black box modeling"> black box modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=longitudinal%20dynamics" title=" longitudinal dynamics"> longitudinal dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=least%20square%20error" title=" least square error"> least square error</a> </p> <a href="https://publications.waset.org/abstracts/70091/modeling-of-a-uav-longitudinal-dynamics-through-system-identification-technique" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70091.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20023</span> A Way of Converting Color Images to Gray Scale Ones for the Color-Blind: Applying to the part of the Tokyo Subway Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsuhiro%20Narikiyo">Katsuhiro Narikiyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Shota%20Hashikawa"> Shota Hashikawa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a way of removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for the color-blind. We treat crisp, colorful images such as the Tokyo subway map, in which each color carries important information. For color-blind viewers, however, similar colors cannot be distinguished. If those colors are converted to distinct gray values, they become distinguishable. We therefore convert color images to monochrome images.
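<p class="card-text">A minimal sketch of the core idea, mapping colors that a color-blind reader may confuse onto clearly separated gray values, is given below in Python. The palette, tolerance, gray levels, and use of Pillow/NumPy are assumptions made for illustration, not the authors' implementation.</p> <pre><code>
import numpy as np
from PIL import Image

# Assumed palette: dominant line colors of a subway-style map and the
# distinct gray levels (0-255) they are mapped to, so that hues a
# color-blind viewer may confuse remain separable after conversion.
palette = {
    (230, 0, 18): 40,      # red line
    (0, 153, 68): 110,     # green line (red/green are commonly confused)
    (0, 104, 183): 180,    # blue line
    (241, 141, 0): 230,    # orange line
}

def to_distinct_gray(img, tol=60.0):
    rgb = np.asarray(img.convert("RGB"), dtype=float)
    gray = np.array(img.convert("L"), dtype=np.uint8)    # ordinary grayscale
    for rgb_color, level in palette.items():
        dist = np.linalg.norm(rgb - np.array(rgb_color, dtype=float), axis=-1)
        gray[np.less_equal(dist, tol)] = level           # remap nearby pixels
    return Image.fromarray(gray)

# Example usage with an assumed file name:
# result = to_distinct_gray(Image.open("tokyo_subway_crop.jpg"))
</code></pre>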
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color-blind" title="color-blind">color-blind</a>, <a href="https://publications.waset.org/abstracts/search?q=JPEG" title=" JPEG"> JPEG</a>, <a href="https://publications.waset.org/abstracts/search?q=monochrome%20image" title=" monochrome image"> monochrome image</a>, <a href="https://publications.waset.org/abstracts/search?q=denoise" title=" denoise"> denoise</a> </p> <a href="https://publications.waset.org/abstracts/2968/a-way-of-converting-color-images-to-gray-scale-ones-for-the-color-blind-applying-to-the-part-of-the-tokyo-subway-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20022</span> Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Venkatesh%20Pulletikurthi">Venkatesh Pulletikurthi</a>, <a href="https://publications.waset.org/abstracts/search?q=Karthik%20B.%20Ariyur"> Karthik B. Ariyur</a>, <a href="https://publications.waset.org/abstracts/search?q=Luciano%20Castillo"> Luciano Castillo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> System identification from turbulent wakes provides a tactical advantage in preparing for, and predicting the trajectory of, the opponents’ movements. A low-order modeling technique, proper orthogonal decomposition (POD), is used to identify the object from its wake pattern and is compared with a pre-trained image-recognition neural network (NN) that classifies the wake patterns into objects. It is demonstrated that the low-order POD model predicts the objects better than the pre-trained NN by ~30%.
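<p class="card-text">Proper orthogonal decomposition reduces a set of wake snapshots to a few energetic modes whose coefficients act as a low-order signature of the flow. The NumPy sketch below shows that decomposition step via the singular value decomposition; the snapshot matrix, grid size, and number of retained modes are placeholders, not the study's dataset.</p> <pre><code>
import numpy as np

# Assumed snapshot matrix: each column is one flow-field snapshot of the
# wake (e.g. velocity magnitude on a grid), flattened to a vector.
n_points, n_snapshots = 2000, 150
rng = np.random.default_rng(1)
X = rng.standard_normal((n_points, n_snapshots))

X_mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

r = 5                                   # number of POD modes retained
modes = U[:, :r]                        # spatial POD modes
coeffs = np.diag(s[:r]) @ Vt[:r, :]     # temporal coefficients per snapshot
energy = (s[:r]**2).sum() / (s**2).sum()
print(f"first {r} modes capture {energy:.1%} of the fluctuation energy")

# 'coeffs' (r values per snapshot) form the low-order representation that
# can be matched against reference wakes to identify the generating object.
</code></pre>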
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20bluff%20body%20wakes" title="the bluff body wakes">the bluff body wakes</a>, <a href="https://publications.waset.org/abstracts/search?q=low-order%20modeling" title=" low-order modeling"> low-order modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/146168/application-of-low-order-modeling-techniques-and-neural-network-based-models-for-system-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20021</span> Smart Unmanned Parking System Based on Radio Frequency Identification Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu%20Qin">Yu Qin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to tackle the ever-growing problem of the lack of parking space, this paper presents the design and implementation of a smart unmanned parking system based on RFID (radio frequency identification) and wireless communication technology. The system uses RFID technology to achieve the identification function (transmitted by a 2.4 GHz wireless module) and is equipped with an STM32L053 microcontroller as the main control chip of the smart vehicle. This chip can accomplish automatic parking (in/out), charging, and other functions. On this basis, it can also help users easily query the information stored in the database through the Internet. Experimental tests have shown that the system has the features of low power consumption and stable operation, among others. It can effectively improve the level of automation of the parking lot management system and has enormous application prospects.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RFID" title="RFID">RFID</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20system" title=" embedded system"> embedded system</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned" title=" unmanned"> unmanned</a>, <a href="https://publications.waset.org/abstracts/search?q=parking%20management" title=" parking management"> parking management</a> </p> <a href="https://publications.waset.org/abstracts/81174/smart-unmanned-parking-system-based-on-radio-frequency-identification-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81174.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20020</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This approach is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system. The idea behind designing and creating a face recognition system using deep learning with Azure ML, Python, and OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given: the system achieves 98.46% accuracy using Fast-RCNN, with the performance of the algorithms evaluated under different training conditions.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20019</span> Development of a Computer Vision System for the Blind and Visually Impaired Person</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rodrigo%20C.%20Belleza">Rodrigo C. Belleza</a>, <a href="https://publications.waset.org/abstracts/search?q=Jr."> Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Roselyn%20A.%20Maa%C3%B1o"> Roselyn A. Maaño</a>, <a href="https://publications.waset.org/abstracts/search?q=Karl%20Patrick%20E.%20Camota"> Karl Patrick E. Camota</a>, <a href="https://publications.waset.org/abstracts/search?q=Darwin%20Kim%20Q.%20Bulawan"> Darwin Kim Q. Bulawan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyes are an essential and conspicuous organ of the human body. Human eyes are outward and inward portals of the body that allow us to see the outside world and provide glimpses into one's inner thoughts and feelings. Blindness and visual impairments may result from eye-related disease, trauma, or congenital or degenerative conditions that cannot be corrected by conventional means. The study emphasizes innovative tools that will serve as an aid to blind and visually impaired (VI) individuals. The researchers fabricated a prototype that utilizes the Microsoft Kinect for Windows and an Arduino microcontroller board. The prototype facilitates advanced gesture recognition, voice recognition, obstacle detection and indoor environment navigation. Open Computer Vision (OpenCV) performs image analysis and gesture tracking to transform Kinect data into the desired output. A computer vision technology device provides greater accessibility for those with vision impairments.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithms" title="algorithms">algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=blind" title=" blind"> blind</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=embedded%20systems" title=" embedded systems"> embedded systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a> </p> <a href="https://publications.waset.org/abstracts/2016/development-of-a-computer-vision-system-for-the-blind-and-visually-impaired-person" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2016.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20018</span> Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eric%20Yi-Hsiu%20Huang">Eric Yi-Hsiu Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Meng-Chun%20Kao"> Meng-Chun Kao</a>, <a href="https://publications.waset.org/abstracts/search?q=Wen-Chuan%20Kuo"> Wen-Chuan Kuo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of “live” two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but also can eliminate operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the quantitative image features used by the classifier, showing that the accuracy of the classifier for determining inside vs. outside of the peritoneal cavity was 98% (AUC=0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind, closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
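<p class="card-text">The inside/outside decision described above is a binary classification whose discrimination is summarized by an ROC curve and its AUC. The scikit-learn sketch below illustrates that evaluation step; the feature scores and labels are synthetic placeholders rather than the study's OCT image features.</p> <pre><code>
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# Assumed data: one quantitative OCT image feature per frame and a label
# of 1 if the needle tip is inside the peritoneal cavity, else 0.
labels = np.r_[np.ones(1200), np.zeros(1200)].astype(int)
scores = np.r_[rng.normal(1.8, 0.5, 1200), rng.normal(0.0, 0.5, 1200)]

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
# pick the threshold that maximizes sensitivity + specificity (Youden's J)
best = np.argmax(tpr - fpr)
pred = (scores >= thresholds[best]).astype(int)
accuracy = (pred == labels).mean()
print(f"AUC = {auc:.2f}, accuracy at chosen threshold = {accuracy:.2%}")
</code></pre>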
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pneumoperitoneum" title="pneumoperitoneum">pneumoperitoneum</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20coherence%20tomography" title=" optical coherence tomography"> optical coherence tomography</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20identification" title=" automatic identification"> automatic identification</a>, <a href="https://publications.waset.org/abstracts/search?q=veress%20needle" title=" veress needle"> veress needle</a> </p> <a href="https://publications.waset.org/abstracts/149622/pneumoperitoneum-creation-assisted-with-optical-coherence-tomography-and-automatic-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20017</span> Identification of Nonlinear Systems Structured by Hammerstein-Wiener Model </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Brouri">A. Brouri</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Giri"> F. Giri</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Mkhida"> A. Mkhida</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Elkarkri"> A. Elkarkri</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20L.%20Chhibat"> M. L. Chhibat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Standard Hammerstein-Wiener models consist of a linear subsystem sandwiched between two memoryless nonlinearities. Presently, the linear subsystem is allowed to be parametric or not, continuous- or discrete-time. The input and output nonlinearities are polynomial and may be noninvertible. A two-stage identification method is developed such that the parameters of all nonlinear elements are estimated first using the Kozen-Landau polynomial decomposition algorithm. The obtained estimates are then relied upon in the identification of the linear subsystem, making use of suitable pre- and post-compensators.
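<p class="card-text">To make the model structure concrete, the sketch below simulates a simple Hammerstein-Wiener system in Python: a static polynomial input nonlinearity, a linear discrete-time subsystem, and a static polynomial output nonlinearity. The particular polynomials and filter coefficients are illustrative assumptions, and the sketch shows only the model structure, not the paper's two-stage estimation method.</p> <pre><code>
import numpy as np
from scipy.signal import lfilter

def hammerstein_wiener(u, f_coeffs, b, a, g_coeffs):
    """Simulate y = g( G(q) * f(u) ) for a discrete-time H-W structure."""
    v = np.polyval(f_coeffs, u)      # input nonlinearity f(.)
    w = lfilter(b, a, v)             # linear subsystem G(q) = B(q)/A(q)
    return np.polyval(g_coeffs, w)   # output nonlinearity g(.)

rng = np.random.default_rng(3)
u = rng.uniform(-1.0, 1.0, 300)                  # excitation signal
f_coeffs = [0.5, 1.0, 0.0]                       # f(x) = 0.5 x^2 + x
g_coeffs = [0.2, 0.0, 1.0, 0.0]                  # g(x) = 0.2 x^3 + x
b, a = [0.0, 0.4, 0.3], [1.0, -0.6, 0.1]         # assumed linear dynamics
y = hammerstein_wiener(u, f_coeffs, b, a, g_coeffs)
print(y[:5])
</code></pre>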
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20system%20identification" title="nonlinear system identification">nonlinear system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=Hammerstein-Wiener%20systems" title=" Hammerstein-Wiener systems"> Hammerstein-Wiener systems</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20identification" title=" frequency identification"> frequency identification</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20decomposition" title=" polynomial decomposition"> polynomial decomposition</a> </p> <a href="https://publications.waset.org/abstracts/7969/identification-of-nonlinear-systems-structured-by-hammerstein-wiener-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7969.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">511</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20016</span> A Palmprint Identification System Based Multi-Layer Perceptron</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20P.%20Tantua">David P. Tantua</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan"> Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics has recently been used in human identification systems based on biological traits such as fingerprints and iris scans. Biometric identification systems show great efficiency and accuracy in such human identification applications. However, these types of systems are so far based on some image processing techniques only, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using a backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing stage is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and removal of unwanted components of the image. The second phase is to feed those processed images into a neural network classifier, which adaptively learns and creates a class for each different image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images. Therefore, the same 100 images are used for testing it, and any image outside the training set should be unrecognized. The experimental results show that the developed system has a high accuracy of 100% and can be implemented in real-life applications.
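<p class="card-text">The preprocessing-plus-classifier pipeline outlined above can be sketched with scikit-image and scikit-learn as follows. The stand-in images, feature size, and network settings are placeholders, and MLPClassifier stands in for the paper's multi-layer perceptron; this is an illustration of the pipeline, not the authors' code.</p> <pre><code>
import numpy as np
from skimage import color, exposure, feature, filters, morphology
from sklearn.neural_network import MLPClassifier

def palmprint_features(rgb, size=(64, 64)):
    """Median filter, adjustment, Canny edges, skeleton, then a flat feature vector."""
    gray = exposure.rescale_intensity(color.rgb2gray(rgb))   # image adjustment
    gray = filters.median(gray)                              # median filtering
    skeleton = morphology.skeletonize(feature.canny(gray))   # edges, then skeleton
    h, w = skeleton.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    return skeleton[np.ix_(ys, xs)].astype(float).ravel()

# Stand-in palm images (in practice these would be loaded from a database
# such as CASIA); one image and therefore one class per subject.
rng = np.random.default_rng(6)
images = rng.integers(0, 256, size=(100, 128, 128, 3), dtype=np.uint8)
X = np.array([palmprint_features(img) for img in images])
y = np.arange(len(images))

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("identification rate on the training images:", clf.score(X, y))
</code></pre>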
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=biological%20traits" title=" biological traits"> biological traits</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20perceptron%20neural%20network" title=" multi-layer perceptron neural network"> multi-layer perceptron neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20skeletonizing" title=" image skeletonizing"> image skeletonizing</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20detection%20using%20canny%20operator" title=" edge detection using canny operator"> edge detection using canny operator</a> </p> <a href="https://publications.waset.org/abstracts/26617/a-palmprint-identification-system-based-multi-layer-perceptron" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26617.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20015</span> Evaluation of DNA Microarray System in the Identification of Microorganisms Isolated from Blood</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Merih%20%C5%9Eim%C5%9Fek">Merih Şimşek</a>, <a href="https://publications.waset.org/abstracts/search?q=Recep%20Ke%C5%9Fli"> Recep Keşli</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%96zg%C3%BCl%20%C3%87etinkaya"> Özgül Çetinkaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Cengiz%20Demir"> Cengiz Demir</a>, <a href="https://publications.waset.org/abstracts/search?q=Adem%20Aslan"> Adem Aslan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Bacteremia is a clinical entity with high morbidity and mortality rates when immediate diagnosis or treatment cannot be achieved. Microorganisms which can cause sepsis or bacteremia are easily isolated from blood cultures. Fifty-five positive blood cultures were included in this study. Microorganisms in the 55 blood cultures were isolated by conventional microbiological methods; afterwards, the microorganisms were identified phenotypically by the Vitek-2 system. The same microorganisms in all blood culture samples were then identified genotypically by the Multiplex-PCR DNA Low-Density Microarray System. At the end of the identification process, the DNA microarray system’s success in identification was evaluated against the Vitek-2 system. The Vitek-2 system and the DNA microarray system identified the same microorganisms in 53 samples; on the other hand, different microorganisms were identified in 2 blood cultures by the DNA microarray system. The microorganisms identified by the Vitek-2 system were found to be identical to 96.4% of the microorganisms identified by the DNA microarray system. In addition to the bacteria identified by Vitek-2, the presence of a second bacterium was detected in 5 blood cultures by the DNA microarray system. Eighteen of the 55 positive blood cultures were identified as E. coli strains by both the Vitek-2 and DNA microarray systems.
The corresponding identification numbers were 6 and 8 for Acinetobacter baumannii, 10 and 10 for K. pneumoniae, 5 and 5 for S. aureus, 7 and 11 for Enterococcus spp., 5 and 5 for P. aeruginosa, and 2 and 2 for C. albicans, respectively. According to these results, the DNA microarray system requires both a technical device and experienced staff support; besides, it requires more expensive kits than Vitek-2. However, this method should be used in conjunction with conventional microbiological methods. Thus, large microbiology laboratories will produce faster, more sensitive and more successful results in the identification of cultured microorganisms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=microarray" title="microarray">microarray</a>, <a href="https://publications.waset.org/abstracts/search?q=Vitek-2" title=" Vitek-2"> Vitek-2</a>, <a href="https://publications.waset.org/abstracts/search?q=blood%20culture" title=" blood culture"> blood culture</a>, <a href="https://publications.waset.org/abstracts/search?q=bacteremia" title=" bacteremia"> bacteremia</a> </p> <a href="https://publications.waset.org/abstracts/72604/evaluation-of-dna-microarray-system-in-the-identification-of-microorganisms-isolated-from-blood" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20014</span> Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abdulkader%20Helwan">Abdulkader Helwan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rheumatoid arthritis is characterized as a chronic inflammatory disorder which affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first one is the image processing stage, in which the images are processed using techniques such as RGB to grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs for the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm involving training of the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rheumatoid%20arthritis" title="rheumatoid arthritis">rheumatoid arthritis</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20identification" title=" intelligent identification"> intelligent identification</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20classifier" title=" neural classifier"> neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=backpropoagation" title=" backpropoagation"> backpropoagation</a> </p> <a href="https://publications.waset.org/abstracts/26123/intelligent-rheumatoid-arthritis-identification-system-based-image-processing-and-neural-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26123.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">532</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20013</span> Texture Identification Using Vision System: A Method to Predict Functionality of a Component</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Varsha%20Singh">Varsha Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Shraddha%20Prajapati"> Shraddha Prajapati</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20B.%20Kiran"> M. B. Kiran</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Texture identification is useful in predicting the functionality of a component. Many of the existing texture identification methods are contact-based, which limits their measuring speed. These contact measurement techniques use a diamond stylus, and since the stylus is sharp it can damage the surface under inspection; hence these techniques can only be used for statistical sampling. Though these contact methods are very accurate, they do not give complete information for full characterization of the surface. In this context, the presented method assumes special significance. The method uses a relatively low-cost vision system for image acquisition. Software based on the wavelet transform is developed for analyzing the texture images. Specimens are made using different manufacturing processes (shaping, grinding, milling, etc.). During experimentation, the specimens are illuminated using proper lighting, and texture images are captured using a CCD camera connected to the vision system. The software installed in the vision system processes these images and subsequently identifies the texture of the manufacturing process.
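<p class="card-text">Wavelet-based texture analysis commonly decomposes an image into sub-bands and uses the sub-band energies as a signature of the machining process. The PyWavelets sketch below illustrates that idea with a nearest-signature matching rule; the wavelet choice, decomposition level, and reference data are assumptions for illustration rather than the software developed in the study.</p> <pre><code>
import numpy as np
import pywt

def wavelet_texture_signature(image, wavelet="db2", level=2):
    """Energy of each wavelet sub-band, used as a texture feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                   # detail sub-bands
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

# Assumed reference signatures, one per manufacturing process (in practice
# averaged over several captured texture images of known specimens).
rng = np.random.default_rng(4)
references = {name: wavelet_texture_signature(rng.random((128, 128)))
              for name in ("shaping", "grinding", "milling")}

def identify_process(image):
    sig = wavelet_texture_signature(image)
    return min(references, key=lambda k: np.linalg.norm(references[k] - sig))

print(identify_process(rng.random((128, 128))))
</code></pre>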
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diamond%20stylus" title="diamond stylus">diamond stylus</a>, <a href="https://publications.waset.org/abstracts/search?q=manufacturing%20process" title=" manufacturing process"> manufacturing process</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20identification" title=" texture identification"> texture identification</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20system" title=" vision system"> vision system</a> </p> <a href="https://publications.waset.org/abstracts/61722/texture-identification-using-vision-system-a-method-to-predict-functionality-of-a-component" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61722.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20012</span> Removing Barriers in Assessment and Feedback for Blind Students in Open Distance Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sindile%20Ngubane-Mokiwa">Sindile Ngubane-Mokiwa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper addresses two questions: (1) What barriers do blind students face with assessment and feedback in open distance learning contexts? and (2) How can these barriers be removed? The paper focuses on distance education, through which most students with disabilities elevate their chances of accessing higher education. Lack of genuine inclusion is also evident in the challenges blind students face during assessment. These barriers are experienced at both formative and summative stages. The insights in this paper emanate from a case study that was carried out through qualitative approaches. The data was collected through in-depth interviews, life stories, and telephonic interviews. The paper provides a review of local, continental and international views on how best assessment barriers can be removed. A group of five blind students, comprising two honours students, two master's students and one doctoral student, participated in this study. The data analysis was done through thematic analysis. The findings revealed that (a) feedback on assignments is often inaccessible; (b) the software used is incompatible; (c) learning and assessment are designed using exclusionary approaches; (d) assessment facilities are not conducive; and (e) there is a lack of proactive, innovative assessment strategies. The article concludes by recommending ways in which barriers to assessment can be removed, including addressing inclusive assessment and feedback strategies in professional development initiatives.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=assessment%20design" title="assessment design">assessment design</a>, <a href="https://publications.waset.org/abstracts/search?q=barriers" title=" barriers"> barriers</a>, <a href="https://publications.waset.org/abstracts/search?q=disabilities" title=" disabilities"> disabilities</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20students" title=" blind students"> blind students</a>, <a href="https://publications.waset.org/abstracts/search?q=feedback" title=" feedback"> feedback</a>, <a href="https://publications.waset.org/abstracts/search?q=universal%20design%20for%20learning" title=" universal design for learning"> universal design for learning</a> </p> <a href="https://publications.waset.org/abstracts/67210/removing-barriers-in-assessment-and-feedback-for-blind-students-in-open-distance-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67210.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20011</span> Identification of Nonlinear Systems Using Radial Basis Function Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=C.%20Pislaru">C. Pislaru</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Shebani"> A. Shebani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper uses the radial basis function neural network (RBFNN) for system identification of nonlinear systems. Five nonlinear systems are used to examine the ability of the RBFNN in modeling nonlinear systems: a dual-tank system, a single-tank system, a DC motor system, and two academic models. A feed-forward structure is considered in this work for modelling the nonlinear dynamic systems; the K-means clustering algorithm is used to select the centers of the radial basis function network because it is reliable, offers fast convergence, and can handle large data sets. The least mean square method is used to adjust the weights of the output layer, and the Euclidean distance is used to set the widths of the Gaussian functions.
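<p class="card-text">A compact sketch of this construction is given below: K-means places the RBF centers, the Gaussian widths are set from inter-center Euclidean distances, and the output-layer weights are fitted by a batch least-squares solve, which stands in here for the paper's least-mean-square update. The example data, number of centers, and width rule are illustrative assumptions.</p> <pre><code>
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Assumed identification data: input u[k] and output y[k] of a nonlinear plant
u = rng.uniform(-2, 2, 400)
y = np.tanh(1.5 * u) + 0.05 * rng.standard_normal(400)   # toy nonlinear system

X = u.reshape(-1, 1)
n_centers = 12
km = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_

# Width of each Gaussian from the mean Euclidean distance between centers
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
sigma = d.sum() / (n_centers * (n_centers - 1))

def rbf_design_matrix(X):
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(dist ** 2) / (2 * sigma ** 2))

Phi = rbf_design_matrix(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights
y_hat = Phi @ w
print("RMS model error:", np.sqrt(np.mean((y - y_hat) ** 2)))
</code></pre>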
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title="system identification">system identification</a>, <a href="https://publications.waset.org/abstracts/search?q=nonlinear%20systems" title=" nonlinear systems"> nonlinear systems</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=radial%20basis%20function" title=" radial basis function"> radial basis function</a>, <a href="https://publications.waset.org/abstracts/search?q=K-means%20clustering%20algorithm" title=" K-means clustering algorithm "> K-means clustering algorithm </a> </p> <a href="https://publications.waset.org/abstracts/14775/identification-of-nonlinear-systems-using-radial-basis-function-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14775.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20010</span> Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuri%20V.%20Kim">Yuri V. Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article presents a new approach to the functional testing of Space Systems (SS). It can be considered a generic test, usable for a wide class of SS that, from the point of view of system dynamics and control, may be described by ordinary differential equations. The suggested methodology is based on a semi-natural experiment: a laboratory stand that does not require complicated, precise and expensive technological control-verification equipment. However, it allows for testing the system as a whole, totally assembled unit during Assembling, Integration and Testing (AIT) activities, involving the system hardware (HW) and software (SW). The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their signals in real time. The data are then transferred to a laboratory PC, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics to be estimated experimentally, in the form of the system differential equations, and compared with the expected mathematical model that was verified by mathematical simulation earlier, during the design process.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=system%20dynamics" title="system dynamics">system dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20system%20ground%20tests%20and%20space%20qualification" title=" space system ground tests and space qualification"> space system ground tests and space qualification</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20dynamics%20identification" title=" system dynamics identification"> system dynamics identification</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20attitude%20control" title=" satellite attitude control"> satellite attitude control</a>, <a href="https://publications.waset.org/abstracts/search?q=assembling" title=" assembling"> assembling</a>, <a href="https://publications.waset.org/abstracts/search?q=integration%20and%20testing" title=" integration and testing"> integration and testing</a> </p> <a href="https://publications.waset.org/abstracts/137789/verification-of-space-system-dynamics-using-the-matlab-identification-toolbox-in-space-qualification-test" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">163</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20009</span> Disciplined Care for Disciplined Patients: Results from Daily Experiences of Hospitalized Patients with Blindness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahmood%20Shamshiri">Mahmood Shamshiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While visual sensation is the key gate for human beings to understand the world, visual impairment is one of the most common causes of disability around the world. There is no doubt about the importance of eyesight in daily life; in many societies it is even understood as the best gift of God to human beings. Blind people are admitted to hospital for different health issues. Nurses and other health professionals who provide care for this group of patients need to understand them. Understanding the lived experience of blind people helps nurses to expand their knowledge regarding blind patients in order to provide holistic care and improve the quality of care for blind patients. This phenomenological inquiry aimed to describe the meaning of discipline in the daily life of blind people admitted to hospital. An interpretive phenomenology underpinned the philosophical approach of the study. While interpretive phenomenology played an umbrella role in the overall design of the study, the six methodical activities introduced by van Manen helped the researchers to conduct it. ‘Disciplined care for disciplined patients’ was the main theme that emerged from the dialogues of blind patients about their daily life in the hospital. Almost all of the participants called themselves disciplined people.
The theme ‘disciplined care for disciplined patients’ emerged from sub-themes including discipline through careful touching and listening, discipline as the ideal way of existence, discipline as the preferred way of being independent, the desire to receive disciplined and detailed care, and reactions to an undisciplined caring culture. This phenomenological inquiry into the experiences of patients with blindness in hospital revealed that they are commonly disciplined people and want to be cared for in a well-organized caring environment. Furthermore, they need to become familiar with the new caring environment. A well-organized and familiar environment helps blind patients to increase their level of independence. In addition, blind patients prefer a detailed, well-informed and disciplined caring culture. Health professionals have to consider the concept of disciplined care in order to provide holistic, comprehensive and competent care. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disciplined%20people" title="disciplined people">disciplined people</a>, <a href="https://publications.waset.org/abstracts/search?q=disciplined%20care" title=" disciplined care"> disciplined care</a>, <a href="https://publications.waset.org/abstracts/search?q=lived%20experience" title=" lived experience"> lived experience</a>, <a href="https://publications.waset.org/abstracts/search?q=patient%20with%20blindness" title=" patient with blindness"> patient with blindness</a> </p> <a href="https://publications.waset.org/abstracts/91963/disciplined-care-for-disciplined-patients-results-from-daily-experiences-of-hospitalized-patients-with-blindness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91963.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20008</span> Model-Free Distributed Control of Dynamical Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javad%20Khazaei">Javad Khazaei</a>, <a href="https://publications.waset.org/abstracts/search?q=Rick%20Blum"> Rick Blum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Distributed control is an efficient and flexible approach for the coordination of multi-agent systems. One of the main challenges in designing a distributed controller is identifying the governing dynamics of the dynamical systems. Data-driven system identification is currently undergoing a revolution. With the availability of high-fidelity measurements and historical data, model-free identification of dynamical systems can facilitate the control design without tedious modeling of high-dimensional and/or nonlinear systems. This paper develops a distributed control design using consensus theory for linear and nonlinear dynamical systems based on sparse identification of the system dynamics. Compared with existing consensus designs that heavily rely on knowing the detailed system dynamics, the proposed model-free design can accurately capture the dynamics of the system from the available measurement and input data and provides guaranteed performance in consensus and tracking problems. Heterogeneous damped oscillators are chosen as example dynamical systems for validation purposes.
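<p class="card-text">Sparse identification of system dynamics (often abbreviated SINDy) regresses measured state derivatives onto a library of candidate functions and prunes small coefficients, leaving a parsimonious model that a consensus controller can then use. The NumPy sketch below shows a sequential thresholded least-squares version of that idea on a single damped oscillator; the library, threshold, and example system are assumptions for illustration, not the paper's implementation.</p> <pre><code>
import numpy as np

# Assumed measurements of a damped oscillator x1' = x2, x2' = -x1 - 0.2 x2
dt, n = 0.01, 4000
x = np.zeros((n, 2))
x[0] = [1.0, 0.0]
for k in range(n - 1):                       # simple Euler simulation of the "plant"
    dx = np.array([x[k, 1], -x[k, 0] - 0.2 * x[k, 1]])
    x[k + 1] = x[k] + dt * dx
dX = np.gradient(x, dt, axis=0)              # numerical derivative of the state

# Candidate function library: [1, x1, x2, x1^2, x1*x2, x2^2]
Theta = np.column_stack([np.ones(n), x[:, 0], x[:, 1],
                         x[:, 0]**2, x[:, 0]*x[:, 1], x[:, 1]**2])

# Sequential thresholded least squares (sparse regression)
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
lam = 0.05
for _ in range(10):
    small = np.less(np.abs(Xi), lam)         # prune small coefficients
    Xi[small] = 0.0
    for j in range(dX.shape[1]):             # refit the surviving terms
        big = ~small[:, j]
        if big.any():
            Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j], rcond=None)[0]
print(np.round(Xi, 3))
# Expected nonzero entries: dx1 = x2, dx2 = -x1 - 0.2 x2
</code></pre>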
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consensus%20tracking" title="consensus tracking">consensus tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20control" title=" distributed control"> distributed control</a>, <a href="https://publications.waset.org/abstracts/search?q=model-free%20control" title=" model-free control"> model-free control</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20identification%20of%20dynamical%20systems" title=" sparse identification of dynamical systems"> sparse identification of dynamical systems</a> </p> <a href="https://publications.waset.org/abstracts/144452/model-free-distributed-control-of-dynamical-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144452.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20007</span> Blind Hybrid ARQ Retransmissions with Different Multiplexing between Time and Frequency for Ultra-Reliable Low-Latency Communications in 5G</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Tawhid%20Kawser">Mohammad Tawhid Kawser</a>, <a href="https://publications.waset.org/abstracts/search?q=Ishrak%20Kabir"> Ishrak Kabir</a>, <a href="https://publications.waset.org/abstracts/search?q=Sadia%20Sultana"> Sadia Sultana</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanjim%20Ahmad"> Tanjim Ahmad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A promising service category of 5G, popularly known as Ultra-Reliable Low-Latency Communications (URLLC), is devoted to providing users with the staunchest fail-safe connections within fractions of a second. The reliability of data transfer offered by Hybrid ARQ (HARQ) should be employed, as URLLC applications are highly error-sensitive. However, the delay added by HARQ ACK/NACK signaling and retransmissions can degrade performance, as URLLC applications are highly delay-sensitive too. To improve latency while maintaining reliability, this paper proposes the use of blind transmissions of redundancy versions exploiting the frequency diversity of the wide bandwidth of 5G. The blind HARQ retransmissions proposed so far consider narrow bandwidth cases, for example, dedicated short range communication (DSRC) and shared channels for device-to-device (D2D) communication, and thus do not gain much from frequency diversity. The proposal also combines blind and ACK/NACK-based retransmissions with different multiplexing options between time and frequency, depending on the current radio channel quality and the stringency of the latency requirements. Given the wide bandwidth of 5G, the proposed blind retransmission, which does not wait for ACK/NACK, is not palpably extravagant. A simulation is performed to demonstrate the latency improvement of the proposed scheme.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=5G" title="5G">5G</a>, <a href="https://publications.waset.org/abstracts/search?q=URLLC" title=" URLLC"> URLLC</a>, <a href="https://publications.waset.org/abstracts/search?q=HARQ" title=" HARQ"> HARQ</a>, <a href="https://publications.waset.org/abstracts/search?q=latency" title=" latency"> latency</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20diversity" title=" frequency diversity"> frequency diversity</a> </p> <a href="https://publications.waset.org/abstracts/188390/blind-hybrid-arq-retransmissions-with-different-multiplexing-between-time-and-frequency-for-ultra-reliable-low-latency-communications-in-5g" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188390.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">36</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20006</span> Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatemeh%20Deldarabdolmaleki">Fatemeh Deldarabdolmaleki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamad%20Fakri%20Zaky%20Bin%20Ja%27afar"> Mohamad Fakri Zaky Bin Ja&#039;afar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space is a space which utilizes daylight as a basic source of illumination to fulfil users' visual demands and minimize electric energy consumption. The Malaysian climate is hot and humid all year round because of the country's location in the equatorial belt. However, because most of the commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office suffers from high heat gain, glare, and occupant discomfort. Balancing occupants' comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on the cut-out metrics suggested in the literature. The workplace area in a private office room has been selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The study goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), as well as glare. Recently developed software for analyzing high dynamic range images (HDRI captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps to investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions.
The findings show that, in most cases, the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g., 10° and 40° slat angles), the workplane illuminance level in the second half of the day exceeds the maximum of 2000 lx. Generally, a rising trend is observed in mean window luminance, and the most unpleasant cases occur after 2 P.M. Considering the luminance criteria ratings, the uncomfortable conditions occur in the afternoon session. Surprisingly, even with no blinds, extreme window/task luminance ratios are not common. Regarding daylight glare probability, no DGP value higher than 0.35 was recorded in this experiment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=daylighting" title="daylighting">daylighting</a>, <a href="https://publications.waset.org/abstracts/search?q=energy%20simulation" title=" energy simulation"> energy simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=office%20environment" title=" office environment"> office environment</a>, <a href="https://publications.waset.org/abstracts/search?q=Venetian%20blind" title=" Venetian blind"> Venetian blind</a> </p> <a href="https://publications.waset.org/abstracts/63214/evaluating-daylight-performance-in-an-office-environment-in-malaysia-using-venetian-blind-system-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63214.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20005</span> Damage Localization of Deterministic-Stochastic Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yen-Po%20Wang">Yen-Po Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ming-Chih%20Huang"> Ming-Chih Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ming-Lian%20Chang"> Ming-Lian Chang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A scheme integrating deterministic–stochastic subspace system identification with the damage localization vector method is proposed in this study for damage detection of structures based on seismic response data. A series of shaking table tests using a five-storey steel frame has been conducted at the National Center for Research on Earthquake Engineering (NCREE), Taiwan. Damage conditions are simulated by reducing the cross-sectional area of some of the columns at the bottom. Both single damage and combinations of multiple damage conditions at various locations have been considered. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged stories can be identified from the global responses of the structure to earthquakes if sufficiently observed. In addition to detecting damage(s) with respect to the intact structure, identification of new or extended damage in the as-damaged (ill-conditioned) counterpart has also been studied. The proposed scheme proves to be effective. 
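<p class="card-text">The subspace identification step referred to above can be illustrated with a minimal covariance-driven sketch: output correlations are stacked into a block matrix, and an SVD yields the observability matrix from which the system matrices, and hence modal properties, follow. This is a generic, simplified illustration (channel count, block rows, model order and data are all assumed), not the authors' implementation.</p> <pre><code>
# Minimal covariance-driven stochastic subspace identification (SSI-COV) sketch.
import numpy as np

def ssi_cov(Y, i=20, order=4, dt=0.01):
    """Y: (channels, samples) output-only response data; i: number of block rows."""
    l, N = Y.shape
    # Estimated output correlation matrices R_k
    R = [Y[:, k:] @ Y[:, :N - k].T / (N - k) for k in range(2 * i)]
    # Block matrix of correlations, which factorizes into observability x controllability
    H = np.block([[R[ii + jj + 1] for jj in range(i)] for ii in range(i)])
    # Truncated SVD gives the extended observability matrix
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])
    # State matrix A from the shift structure of O, output matrix C from its first block
    A = np.linalg.pinv(O[:-l, :]) @ O[l:, :]
    C = O[:l, :]
    lam = np.linalg.eigvals(A)
    freqs_hz = np.abs(np.log(lam)) / (2 * np.pi * dt)     # natural-frequency estimates
    return A, C, np.sort(freqs_hz)

# Toy two-channel response (assumed data), just to show the call signature
rng = np.random.default_rng(1)
Y = rng.standard_normal((2, 5000))
A_id, C_id, freqs = ssi_cov(Y)
print("identified natural frequencies (Hz):", freqs.round(2))
</code></pre>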
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=damage%20locating%20vectors" title="damage locating vectors">damage locating vectors</a>, <a href="https://publications.waset.org/abstracts/search?q=deterministic-stochastic%20subspace%20system" title=" deterministic-stochastic subspace system"> deterministic-stochastic subspace system</a>, <a href="https://publications.waset.org/abstracts/search?q=shaking%20table%20tests" title=" shaking table tests"> shaking table tests</a>, <a href="https://publications.waset.org/abstracts/search?q=system%20identification" title=" system identification"> system identification</a> </p> <a href="https://publications.waset.org/abstracts/5097/damage-localization-of-deterministic-stochastic-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5097.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">327</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20004</span> Self-Tuning Robot Control Based on Subspace Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mathias%20Marquardt">Mathias Marquardt</a>, <a href="https://publications.waset.org/abstracts/search?q=Peter%20D%C3%BCnow"> Peter Dünow</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Ba%C3%9Fler"> Sandra Baßler</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes the use of subspace based identification methods for auto tuning of a state space control system. The plant is an unstable but self balancing transport robot. Because of the unstable character of the process it has to be identified from closed loop input-output data. Based on the identified model a state space controller combined with an observer is calculated. The subspace identification algorithm and the controller design procedure is combined to a auto tuning method. The capability of the approach was verified in a simulation experiments under different process conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto%20tuning" title="auto tuning">auto tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=balanced%20robot" title=" balanced robot"> balanced robot</a>, <a href="https://publications.waset.org/abstracts/search?q=closed%20loop%20identification" title=" closed loop identification"> closed loop identification</a>, <a href="https://publications.waset.org/abstracts/search?q=subspace%20identification" title=" subspace identification"> subspace identification</a> </p> <a href="https://publications.waset.org/abstracts/49108/self-tuning-robot-control-based-on-subspace-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49108.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20003</span> Possible Impact of Shunt Surgeries on the Spatial Learning of Congenitally-Blind Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Waleed%20Jarjoura">Waleed Jarjoura</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In various cases of visual impairments, the individuals are referred to expert Ophthalmologists in order to establish a correct diagnosis. Children with visual-impairments confront various challenging experiences in life since early childhood throughout lifespan. In some cases, blind infants, especially due to congenital hydrocephalus, suffer from high intra-cranial pressure and, consequently, go through a ventriculo-peritoneal shunt surgery in order to limit the neurological symptoms or decrease the cognitive impairments. In this article, a detailed description of numerous crucial implications of the V/P shunt surgery, through the right posterior-inferior parieto-temporal cortex, on the observed preliminary capabilities that are pre-requisites for the acquisition of literacy skills in braille, basic Math competencies, braille printing which suggest Gerstmann syndrome in the blind. In addition, significant difficultiesorientation and mobility skills using the Cane, in general, organizational skills, and social interactions were observed. The primary conclusion of this report focuses on raising awareness among neuro-surgeons towards the need for alternative intracranial routes for V/P shunt implantation in blind infants that preserve the right posterior-inferior parieto-temporal cortex that is hypothesized to modulate the tactual-spatial cues in braille discrimination. A second conclusion targets educators and therapists that address the acquired dysfunctionsin blind individuals due to V/P shunt surgeries. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=congenital%20blindness" title="congenital blindness">congenital blindness</a>, <a href="https://publications.waset.org/abstracts/search?q=hydrocephalus" title=" hydrocephalus"> hydrocephalus</a>, <a href="https://publications.waset.org/abstracts/search?q=shunt%20surgery" title=" shunt surgery"> shunt surgery</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20orientation" title=" spatial orientation"> spatial orientation</a> </p> <a href="https://publications.waset.org/abstracts/157230/possible-impact-of-shunt-surgeries-on-the-spatial-learning-of-congenitally-blind-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157230.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20002</span> Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taslim%20Khan">Taslim Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Ray%20Hua%20Horng"> Ray Hua Horng</a>, <a href="https://publications.waset.org/abstracts/search?q=Rajendra%20Singh"> Rajendra Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown via MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well-suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating its exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The photodetector also boasts a photodetector responsivity contrast ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detective of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10^-17 W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications. It offers a compelling combination of sensitivity, selectivity, and operational efficiency, making it a highly promising tool for environments requiring precise and reliable UV detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wideband%20gap" title="wideband gap">wideband gap</a>, <a href="https://publications.waset.org/abstracts/search?q=solar%20blind%20photodetector" title=" solar blind photodetector"> solar blind photodetector</a>, <a href="https://publications.waset.org/abstracts/search?q=MOCVD" title=" MOCVD"> MOCVD</a>, <a href="https://publications.waset.org/abstracts/search?q=zinc%20gallate" title=" zinc gallate"> zinc gallate</a> </p> <a href="https://publications.waset.org/abstracts/186831/solar-blind-ni-schottky-photodetector-based-on-mocvd-grown-znga2o4" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186831.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">40</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20001</span> Low Cost Real Time Robust Identification of Impulsive Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Biondi">R. Biondi</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Dys"> G. Dys</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Ferone"> G. Ferone</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Renard"> T. Renard</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Zysman"> M. Zysman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes an automated implementable system for impulsive signals detection and recognition. The system uses a Digital Signal Processing device for the detection and identification process. Here the system analyses the signals in real time in order to produce a particular response if needed. The system analyses the signals in real time in order to produce a specific output if needed. Detection is achieved through normalizing the inputs and comparing the read signals to a dynamic threshold and thus avoiding detections linked to loud or fluctuating environing noise. Identification is done through neuronal network algorithms. As a setup our system can receive signals to “learn” certain patterns. Through “learning” the system can recognize signals faster, inducing flexibility to new patterns similar to those known. Sound is captured through a simple jack input, and could be changed for an enhanced recording surface such as a wide-area recorder. Furthermore a communication module can be added to the apparatus to send alerts to another interface if needed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sound%20detection" title="sound detection">sound detection</a>, <a href="https://publications.waset.org/abstracts/search?q=impulsive%20signal" title=" impulsive signal"> impulsive signal</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20noise" title=" background noise"> background noise</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/14114/low-cost-real-time-robust-identification-of-impulsive-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14114.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20000</span> Frequency Identification of Wiener-Hammerstein Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brouri%20Adil">Brouri Adil</a>, <a href="https://publications.waset.org/abstracts/search?q=Giri%20Fouad"> Giri Fouad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of identifying Wiener-Hammerstein systems is addressed in the presence of two linear subsystems of structure totally unknown. Presently, the nonlinear element is allowed to be noninvertible. The system identification problem is dealt by developing a two-stage frequency identification method such a set of points of the nonlinearity are estimated first. Then, the frequency gains of the two linear subsystems are determined at a number of frequencies. The method involves Fourier series decomposition and only requires periodic excitation signals. All involved estimators are shown to be consistent. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wiener-Hammerstein%20systems" title="Wiener-Hammerstein systems">Wiener-Hammerstein systems</a>, <a href="https://publications.waset.org/abstracts/search?q=Fourier%20series%20expansions" title=" Fourier series expansions"> Fourier series expansions</a>, <a href="https://publications.waset.org/abstracts/search?q=frequency%20identification" title=" frequency identification"> frequency identification</a>, <a href="https://publications.waset.org/abstracts/search?q=automation%20science" title=" automation science"> automation science</a> </p> <a href="https://publications.waset.org/abstracts/7941/frequency-identification-of-wiener-hammerstein-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7941.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">537</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19999</span> Combined Localization, Beamforming, and Interference Threshold Estimation in Underlay Cognitive System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omar%20Nasr">Omar Nasr</a>, <a href="https://publications.waset.org/abstracts/search?q=Yasser%20Naguib"> Yasser Naguib</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Hafez"> Mohamed Hafez </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims at providing an innovative solution for blind interference threshold estimation in an underlay cognitive network to be used in adaptive beamforming by secondary user Transmitter and Receiver. For the task of threshold estimation, blind detection of modulation and SNR are used. For the sake of beamforming several localization algorithms are compared to settle on best one for cognitive environment. Beamforming algorithms as LCMV (Linear Constraint Minimum Variance) and MVDR (Minimum Variance Distortion less) are also proposed and compared. The idea of just nulling the primary user after knowledge of its location is discussed against the idea of working under interference threshold. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20%20radio" title="cognitive radio">cognitive radio</a>, <a href="https://publications.waset.org/abstracts/search?q=underlay" title=" underlay"> underlay</a>, <a href="https://publications.waset.org/abstracts/search?q=beamforming" title=" beamforming"> beamforming</a>, <a href="https://publications.waset.org/abstracts/search?q=MUSIC" title=" MUSIC"> MUSIC</a>, <a href="https://publications.waset.org/abstracts/search?q=MVDR" title=" MVDR"> MVDR</a>, <a href="https://publications.waset.org/abstracts/search?q=LCMV" title=" LCMV"> LCMV</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold%20estimation" title=" threshold estimation"> threshold estimation</a> </p> <a href="https://publications.waset.org/abstracts/17541/combined-localization-beamforming-and-interference-threshold-estimation-in-underlay-cognitive-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">582</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19998</span> User Requirements Analysis for the Development of Assistive Navigation Mobile Apps for Blind and Visually Impaired People</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paraskevi%20Theodorou">Paraskevi Theodorou</a>, <a href="https://publications.waset.org/abstracts/search?q=Apostolos%20Meliones"> Apostolos Meliones</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the context of the development process of two assistive navigation mobile apps for blind and visually impaired people (BVI) an extensive qualitative analysis of the requirements of potential users has been conducted. The analysis was based on interviews with BVIs and aimed to elicit not only their needs with respect to autonomous navigation but also their preferences on specific features of the apps under development. The elicited requirements were structured into four main categories, namely, requirements concerning the capabilities, functionality and usability of the apps, as well as compatibility requirements with respect to other apps and services. The main categories were then further divided into nine sub-categories. This classification, along with its content, aims to become a useful tool for the researcher or the developer who is involved in the development of digital services for BVI. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accessibility" title="accessibility">accessibility</a>, <a href="https://publications.waset.org/abstracts/search?q=assistive%20mobile%20apps" title=" assistive mobile apps"> assistive mobile apps</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20and%20visually%20impaired%20people" title=" blind and visually impaired people"> blind and visually impaired people</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20requirements%20analysis" title=" user requirements analysis"> user requirements analysis</a> </p> <a href="https://publications.waset.org/abstracts/114395/user-requirements-analysis-for-the-development-of-assistive-navigation-mobile-apps-for-blind-and-visually-impaired-people" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114395.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19997</span> Muscle: The Tactile Texture Designed for the Blind</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chantana%20Insra">Chantana Insra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research objective focuses on creating a prototype media of the tactile texture of muscles for educational institutes to help visually impaired students learn massage extra learning materials further than the ordinary curriculum. This media is designed as an extra learning material. The population in this study was 30 blinded students between 4th - 6th grades who were able to read Braille language. The research was conducted during the second semester in 2012 at The Bangkok School for the Blind. The method in choosing the population in the study was purposive sampling. The methodology of the research includes collecting data related to visually impaired people, the production of the tactile texture media, human anatomy and Thai traditional massage from literature reviews and field studies. This information was used for analyzing and designing 14 tactile texture pictures presented to experts to evaluate and test the media. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind" title="blind">blind</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20texture" title=" tactile texture"> tactile texture</a>, <a href="https://publications.waset.org/abstracts/search?q=muscle" title=" muscle"> muscle</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20arts%20and%20design" title=" visual arts and design"> visual arts and design</a> </p> <a href="https://publications.waset.org/abstracts/6200/muscle-the-tactile-texture-designed-for-the-blind" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6200.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19996</span> Understanding the Polygon with the Eyes of Blinds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tu%C4%9Fba%20Horzum">Tuğba Horzum</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmet%20Arikan"> Ahmet Arikan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper was part of a broader study that investigated what blind students (BSs) understood and how they used concept definitions (CDs) and concept images (CIs) for some mathematical concepts. This paper focused on the polygon concept. For this purpose, four open-ended questions were asked to five blind middle school students. During the interviews, BSs were presented with raised-line materials and were given opportunities to construct geometric shapes with magnetic sticks and micro-balls. Qualitative research techniques applied in grounded theory were used for analyzing documents pictures which were taken from magnetic geometric shapes that BSs constructed, raised-line materials and researcher’s observation notes and interviews. At the end of the analysis, it was observed that BSs used mostly their CIs and never took into account the CDs. Besides, BSs encountered with the difficulties associated with the combination of polygon edges’ endpoints consecutively. Additionally, they focused on the interior of the polygon and the angles which have smaller a size. Lastly, BSs were often conflicted about triangle, rectangle, square and circle whether or not a polygon. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blind%20students" title="blind students">blind students</a>, <a href="https://publications.waset.org/abstracts/search?q=concept%20definition" title=" concept definition"> concept definition</a>, <a href="https://publications.waset.org/abstracts/search?q=concept%20image" title=" concept image"> concept image</a>, <a href="https://publications.waset.org/abstracts/search?q=polygon" title=" polygon"> polygon</a> </p> <a href="https://publications.waset.org/abstracts/38759/understanding-the-polygon-with-the-eyes-of-blinds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38759.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19995</span> Utilizing the Analytic Hierarchy Process in Improving Performances of Blind Judo</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyun%20Chul%20Cho">Hyun Chul Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyunkyoung%20Oh"> Hyunkyoung Oh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun%20Yoon"> Hyun Yoon</a>, <a href="https://publications.waset.org/abstracts/search?q=Jooyeon%20Jin"> Jooyeon Jin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jae%20Won%20Lee"> Jae Won Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identifying, structuring, and racking the most important factors related to improving athletes&rsquo; performances could pave the way for improve training system. The purpose of this study was to identify the relative importance factors to improve performance of the of judo athletes with visual impairments, including blindness by using the Analytic Hierarchy Process (AHP). After reviewing the literature, the relative importance of factors affecting performance of the blind judo was selected. A group of expert reviewed the first draft of the questionnaires, and then finally selected performance factors were classified into the major categories of techniques, physical fitness, and psychological categories. Later, a pre-selected experts group was asked to review the final version of questionnaire and confirm the priories of performance factors. The order of priority was determined by performing pairwise comparisons using Expert Choice 2000. Results indicated that &ldquo;grappling&rdquo; (.303) and &ldquo;throwing&rdquo; (.234) were the most important lower hierarchy factors for blind judo skills. In addition, the most important physical factors affecting performance were &ldquo;muscular strength and endurance&rdquo; (.238). Further, among other psychological factors &ldquo;competitive anxiety&rdquo; (.393) was important factor that affects performance. It is important to offer psychological skills training to reduce anxiety of judo athletes with visual impairments and blindness, so they can compete in their optimal states. These findings offer insights into what should be considered when determining factors to improve performance of judo athletes with visual impairments and blindness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=analytic%20hierarchy%20process" title="analytic hierarchy process">analytic hierarchy process</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20athlete" title=" blind athlete"> blind athlete</a>, <a href="https://publications.waset.org/abstracts/search?q=judo" title=" judo"> judo</a>, <a href="https://publications.waset.org/abstracts/search?q=sport%20performance" title=" sport performance"> sport performance</a> </p> <a href="https://publications.waset.org/abstracts/92334/utilizing-the-analytic-hierarchy-process-in-improving-performances-of-blind-judo" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92334.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=1" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=1">1</a></li> <li class="page-item active"><span class="page-link">2</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=668">668</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=669">669</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=blind%20system%20identification&amp;page=3" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a 
href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
</body> </html>
