<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: STARE</title> <meta name="description" content="Search results for: STARE"> <meta name="keywords" content="STARE"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research 
Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="STARE" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form 
method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="STARE"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: STARE</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Almazroe"> A. Almazroe</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Lakshminarayanan"> V. Lakshminarayanan</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is a severe retinal disease which is caused by diabetes mellitus. It leads to blindness when it progress to proliferative level. 
Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal, the abnormal cases covering non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method detects DR reliably: the sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=STARE" title=" STARE"> STARE</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/49824/automatic-detection-and-classification-of-diabetic-retinopathy-using-retinal-fundus-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">5</span> Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Biran">A. Biran</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Sobhe%20Bidari"> P. Sobhe Bidari</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Raahemifar"> K. Raahemifar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR prevents from blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques including Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filter and thresholding. Since Optic Disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases by using MATLAB codes. The results show that this method is perfectly capable of detecting hard exudates and the highly probable soft exudates. It is also capable of detecting the hemorrhages and distinguishing them from blood vessels. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus" title=" fundus"> fundus</a>, <a href="https://publications.waset.org/abstracts/search?q=CHT" title=" CHT"> CHT</a>, <a href="https://publications.waset.org/abstracts/search?q=exudates" title=" exudates"> exudates</a>, <a href="https://publications.waset.org/abstracts/search?q=hemorrhages" title=" hemorrhages"> hemorrhages</a> </p> <a href="https://publications.waset.org/abstracts/52591/automatic-method-for-exudates-and-hemorrhages-detection-from-fundus-retinal-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">272</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> Digital Retinal Images: Background and Damaged Areas Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20A.%20Gani">Eman A. Gani</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. George</a>, <a href="https://publications.waset.org/abstracts/search?q=Faisel%20G.%20Mohammed"> Faisel G. Mohammed</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamal%20H.%20Sager"> Kamal H. Sager</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital retinal images are more appropriate for automatic screening of diabetic retinopathy systems. 
Unfortunately, a significant percentage of these images are of poor quality, which hinders further analysis, due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle, and retinal pigmentation). Retinal images of poor quality need to be enhanced before features and abnormalities can be extracted. Segmentation is essential for this purpose: it smooths and strengthens the image by separating the background and damaged areas from the overall image, resulting in retinal image enhancement and shorter processing time. In this paper, methods for segmenting colored retinal images are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks: a background segmentation mask for extracting the background area, and a poor-quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, and DRIVE, along with some images obtained from ophthalmologists, have been used to validate the proposed segmentation technique. Experimental results indicate the introduced methods are effective and can lead to high segmentation accuracy.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20images" title="retinal images">retinal images</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a>, <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title=" diabetic retinopathy"> diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=background%20segmentation" title=" background segmentation"> background segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=damaged%20areas%20segmentation" title=" damaged areas segmentation"> damaged areas segmentation</a> </p> <a href="https://publications.waset.org/abstracts/12289/digital-retinal-images-background-and-damaged-areas-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12289.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> Detection of Micro-Unmanned Ariel Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tareq%20AlNuaim">Tareq AlNuaim</a>, <a href="https://publications.waset.org/abstracts/search?q=Mubashir%20Alam"> Mubashir Alam</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulrazaq%20Aldowesh"> Abdulrazaq Aldowesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The usage of micro-Unmanned Ariel Vehicles (UAVs) has witnessed an enormous increase recently. 
Detecting such drones has become a necessity to prevent harmful activities. Typically, such targets have low velocity and a low Radar Cross Section (RCS), making them difficult to distinguish from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) radar has much potential: it increases the degrees of freedom on both the transmit and receive ends. Such an architecture allows for flexibility in operation by utilizing direct access to every element in the transmit/receive array. MIMO systems allow for several array processing techniques, permitting the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving, low-RCS micro-UAV used by hobbyists. Radar cross section simulations were carried out using the FEKO simulator, yielding an average of -14.42 dBsm at S-band. The developed prototype was experimentally evaluated, achieving a detection range of more than 300 meters for a DJI Mavic Pro drone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20beamforming" title="digital beamforming">digital beamforming</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20detection" title=" drone detection"> drone detection</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-UAV" title=" micro-UAV"> micro-UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO" title=" MIMO"> MIMO</a>, <a href="https://publications.waset.org/abstracts/search?q=phased%20array" title=" phased array"> phased array</a> </p> <a href="https://publications.waset.org/abstracts/107642/detection-of-micro-unmanned-ariel-vehicles-using-a-multiple-input-multiple-output-digital-array-radar" class="btn btn-primary btn-sm">Procedia</a> <a
href="https://publications.waset.org/abstracts/107642.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> Reduce the Impact of Wildfires by Identifying Them Early from Space and Sending Location Directly to Closest First Responders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gregory%20Sullivan">Gregory Sullivan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evolution of global warming has escalated the number and complexity of forest fires around the world. As an example, the United States and Brazil combined generated more than 30,000 forest fires last year. The impact to our environment, structures and individuals is incalculable. The world has learned to try to take this in stride, trying multiple ways to contain fires. Some countries are trying to use cameras in limited areas. There are discussions of using hundreds of low earth orbit satellites and linking them together, and, interfacing them through ground networks. These are all truly noble attempts to defeat the forest fire phenomenon. But there is a better, simpler answer. A bigger piece of the solutions puzzle is to see the fires while they are small, soon after initiation. The approach is to see the fires while they are very small and report their location (latitude and longitude) to local first responders. This is done by placing a sensor at geostationary orbit (GEO: 26,000 miles above the earth). By placing this small satellite in GEO, we can “stare” at the earth, and sense temperature changes. We do not “see” fires, but “measure” temperature changes. This has already been demonstrated on an experimental scale. 
Fires were seen close to initiation, and the information was forwarded to first responders; the experimental system was the first to identify the fires in 7 out of 8 cases. The goal is to have a small, independent satellite in GEO focused solely on forest fire initiation. With one such satellite, we hope to greatly decrease the impact on persons, property, and the environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20detection" title="space detection">space detection</a>, <a href="https://publications.waset.org/abstracts/search?q=wildfire%20early%20warning" title=" wildfire early warning"> wildfire early warning</a>, <a href="https://publications.waset.org/abstracts/search?q=demonstration%20wildfire%20detection%20and%20action%20from%20space" title=" demonstration wildfire detection and action from space"> demonstration wildfire detection and action from space</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20detection%20to%20first%20responders" title=" space detection to first responders"> space detection to first responders</a> </p> <a href="https://publications.waset.org/abstracts/179337/reduce-the-impact-of-wildfires-by-identifying-them-early-from-space-and-sending-location-directly-to-closest-first-responders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/179337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Design and Development of SSVEP-Based Brain-Computer Interface for Limb Disabled Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Zerihun%20Ketema%20Tadesse">Zerihun Ketema Tadesse</a>, <a href="https://publications.waset.org/abstracts/search?q=Dabbu%20Suman%20Reddy"> Dabbu Suman Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Brain-Computer Interfaces (BCIs) give the possibility for disabled people to communicate and control devices. This work aims at developing steady-state visual evoked potential (SSVEP)-based BCI for patients with limb disabilities. In hospitals, devices like nurse emergency call devices, lights, and TV sets are what patients use most frequently, but these devices are operated manually or using the remote control. Thus, disabled patients are not able to operate these devices by themselves. Hence, SSVEP-based BCI system that can allow disabled patients to control nurse calling device and other devices is proposed in this work. Portable LED visual stimulator that flickers at specific frequencies of 7Hz, 8Hz, 9Hz and 10Hz were developed as part of this project. Disabled patients can stare at specific flickering LED of visual stimulator and Emotiv EPOC used to acquire EEG signal in a non-invasive way. The acquired EEG signal can be processed to generate various control signals depending upon the amplitude and duration of signal components. MATLAB software is used for signal processing and analysis and also for command generation. Arduino is used as a hardware interface device to receive and transmit command signals to the experimental setup. Therefore, this study is focused on the design and development of Steady-state visually evoked potential (SSVEP)-based BCI for limb disabled patients, which helps them to operate and control devices in the hospital room/wards. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SSVEP-BCI" title="SSVEP-BCI">SSVEP-BCI</a>, <a href="https://publications.waset.org/abstracts/search?q=Limb%20Disabled%20Patients" title=" Limb Disabled Patients"> Limb Disabled Patients</a>, <a href="https://publications.waset.org/abstracts/search?q=LED%20Visual%20Stimulator" title=" LED Visual Stimulator"> LED Visual Stimulator</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG%20signal" title=" EEG signal"> EEG signal</a>, <a href="https://publications.waset.org/abstracts/search?q=control%20devices" title=" control devices"> control devices</a>, <a href="https://publications.waset.org/abstracts/search?q=hospital%20room%2Fwards" title=" hospital room/wards"> hospital room/wards</a> </p> <a href="https://publications.waset.org/abstracts/140313/design-and-development-of-ssvep-based-brain-computer-interface-for-limb-disabled-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140313.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted 
small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>