
Search results for: non-Gaussian clutter

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="non-Gaussian clutter"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 34</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: non-Gaussian clutter</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinod%20Kumar%20Jaysaval">Vinod Kumar Jaysaval</a>, <a href="https://publications.waset.org/abstracts/search?q=Prateek%20Agarwal"> Prateek Agarwal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Performance prediction of airborne radar is a challenging and cumbersome task in clutter scenario for different types of targets. A generalized model requires to predict the performance of Radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to bring out the performance of airborne radar for different Pulsed Repetition Frequency (PRF) as well as different type of targets. The model provides a platform to bring out different subsystem parameters for different applications and performance requirements under different types of clutter terrain. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=airborne%20radar" title="airborne radar">airborne radar</a>, <a href="https://publications.waset.org/abstracts/search?q=blind%20zone" title=" blind zone"> blind zone</a>, <a href="https://publications.waset.org/abstracts/search?q=clutter" title=" clutter"> clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=probability%20of%20detection" title=" probability of detection"> probability of detection</a> </p> <a href="https://publications.waset.org/abstracts/13998/a-generalized-model-for-performance-analysis-of-airborne-radar-in-clutter-scenario" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Clutter Suppression Based on Singular Value Decomposition and Fast Wavelet Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ruomeng%20Xiao">Ruomeng Xiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhulin%20Zong"> Zhulin Zong</a>, <a href="https://publications.waset.org/abstracts/search?q=Longfa%20Yang"> Longfa Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the problem that the target signal is difficult to detect under the strong ground clutter environment, this paper proposes a clutter suppression algorithm based on the combination of singular value decomposition and the Mallat fast wavelet algorithm. The method first carries out singular value decomposition on the radar echo data matrix, realizes the initial separation of target and clutter through the threshold processing of singular value, and then carries out wavelet decomposition on the echo data to find out the target location, and adopts the discard method to select the appropriate decomposition layer to reconstruct the target signal, which ensures the minimum loss of target information while suppressing the clutter. After the verification of the measured data, the method has a significant effect on the target extraction under low SCR, and the target reconstruction can be realized without the prior position information of the target and the method also has a certain enhancement on the output SCR compared with the traditional single wavelet processing method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clutter%20suppression" title="clutter suppression">clutter suppression</a>, <a href="https://publications.waset.org/abstracts/search?q=singular%20value%20decomposition" title=" singular value decomposition"> singular value decomposition</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20transform" title=" wavelet transform"> wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=Mallat%20algorithm" title=" Mallat algorithm"> Mallat algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20SCR" title=" low SCR"> low SCR</a> </p> <a href="https://publications.waset.org/abstracts/181202/clutter-suppression-based-on-singular-value-decomposition-and-fast-wavelet-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181202.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Adaptive Target Detection of High-Range-Resolution Radar in Non-Gaussian Clutter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lina%20Pan">Lina Pan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In non-Gaussian clutter of a spherically invariant random vector, in the cases that a certain estimated covariance matrix could become singular, the adaptive target detection of high-range-resolution radar is addressed. Firstly, the restricted maximum likelihood (RML) estimates of unknown covariance matrix and scatterer amplitudes are derived for non-Gaussian clutter. And then the RML estimate of texture is obtained. Finally, a novel detector is devised. It is showed that, without secondary data, the proposed detector outperforms the existing Kelly binary integrator. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=non-Gaussian%20clutter" title="non-Gaussian clutter">non-Gaussian clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=covariance%20matrix%20estimation" title=" covariance matrix estimation"> covariance matrix estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=target%20detection" title=" target detection"> target detection</a>, <a href="https://publications.waset.org/abstracts/search?q=maximum%20likelihood" title=" maximum likelihood"> maximum likelihood</a> </p> <a href="https://publications.waset.org/abstracts/24879/adaptive-target-detection-of-high-range-resolution-radar-in-non-gaussian-clutter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Adaptive CFAR Analysis for Non-Gaussian Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bouchemha%20Amel">Bouchemha Amel</a>, <a href="https://publications.waset.org/abstracts/search?q=Chachoui%20Takieddine"> Chachoui Takieddine</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Maalem"> H. Maalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection of targets in a modern communication system RADAR is based primarily on the concept of adaptive CFAR detector. To have an effective detection, we must minimize the influence of disturbances due to the clutter. The detection algorithm adapts the CFAR detection threshold which is proportional to the average power of the clutter, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms CA-CFAR and OS-CFAR and we compare the thresholds of these detectors in the marine environment (no-Gaussian) with a Weibull distribution. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CFAR" title="CFAR">CFAR</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold" title=" threshold"> threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=clutter" title=" clutter"> clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=distribution" title=" distribution"> distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=Weibull" title=" Weibull"> Weibull</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/21359/adaptive-cfar-analysis-for-non-gaussian-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">588</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Space Time Adaptive Algorithm in Bi-Static Passive Radar Systems for Clutter Mitigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Venu">D. Venu</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20V.%20Koteswara%20Rao"> N. V. Koteswara Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Space – time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Since airborne passive radar systems utilize broadcast, navigation and excellent communication signals to perform various surveillance tasks and also has attracted significant interest from the distinct past, therefore the need of the hour is to have cost effective systems as compared to conventional active radar systems. Moreover, requirements of small number of secondary samples for effective clutter suppression in bi-static passive radar offer abundant illuminator resources for passive surveillance radar systems. This paper presents a framework for incorporating knowledge sources directly in the space-time beam former of airborne adaptive radars. STAP algorithm for clutter mitigation for passive bi-static radar has better quantitation of the reduction in sample size thereby amalgamating the earlier data bank with existing radar data sets. Also, we proposed a novel method to estimate the clutter matrix and perform STAP for efficient clutter suppression based on small sample size. Furthermore, the effectiveness of the proposed algorithm is verified using MATLAB simulations in order to validate STAP algorithm for passive bi-static radar. In conclusion, this study highlights the importance for various applications which augments traditional active radars using cost-effective measures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bistatic%20radar" title="bistatic radar">bistatic radar</a>, <a href="https://publications.waset.org/abstracts/search?q=clutter" title=" clutter"> clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=covariance%20matrix%20passive%20radar" title=" covariance matrix passive radar"> covariance matrix passive radar</a>, <a href="https://publications.waset.org/abstracts/search?q=STAP" title=" STAP"> STAP</a> </p> <a href="https://publications.waset.org/abstracts/62372/space-time-adaptive-algorithm-in-bi-static-passive-radar-systems-for-clutter-mitigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62372.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">295</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Radar Signal Detection Using Neural Networks in Log-Normal Clutter for Multiple Targets Situations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boudemagh%20Naime">Boudemagh Naime</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic radar detection requires some methods of adapting to variations in the background clutter in order to control their false alarm rate. The problem becomes more complicated in non-Gaussian environment. In fact, the conventional approach in real time applications requires a complex statistical modeling and much computational operations. To overcome these constraints, we propose another approach based on artificial neural network (ANN-CMLD-CFAR) using a Back Propagation (BP) training algorithm. The considered environment follows a log-normal distribution in the presence of multiple Rayleigh-targets. To evaluate the performances of the considered detector, several situations, such as scale parameter and the number of interferes targets, have been investigated. The simulation results show that the ANN-CMLD-CFAR processor outperforms the conventional statistical one. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radat%20detection" title="radat detection">radat detection</a>, <a href="https://publications.waset.org/abstracts/search?q=ANN-CMLD-CFAR" title=" ANN-CMLD-CFAR"> ANN-CMLD-CFAR</a>, <a href="https://publications.waset.org/abstracts/search?q=log-normal%20clutter" title=" log-normal clutter"> log-normal clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20modelling" title=" statistical modelling "> statistical modelling </a> </p> <a href="https://publications.waset.org/abstracts/30070/radar-signal-detection-using-neural-networks-in-log-normal-clutter-for-multiple-targets-situations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30070.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">28</span> Design and Realization of Double-Delay Line Canceller (DDLC) Using Fpga</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20E.%20El-Henawey">A. E. El-Henawey</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20El-Kouny"> A. A. El-Kouny</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20M.%20Abd%20%E2%80%93El-Halim"> M. M. Abd –El-Halim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Moving target indication (MTI) which is an anti-clutter technique that limits the display of clutter echoes. It uses the radar received information primarily to display moving targets only. The purpose of MTI is to discriminate moving targets from a background of clutter or slowly-moving chaff particles as shown in this paper. Processing system in these radars is so massive and complex; since it is supposed to perform a great amount of processing in very short time, in most radar applications the response of a single canceler is not acceptable since it does not have a wide notch in the stop-band. A double-delay canceler is an MTI delay-line canceler employing the two-delay-line configuration to improve the performance by widening the clutter-rejection notches, as compared with single-delay cancelers. This canceler is also called a double canceler, dual-delay canceler, or three-pulse canceler. In this paper, a double delay line canceler is chosen for study due to its simplicity in both concept and implementation. Discussing the implementation of a simple digital moving target indicator (DMTI) using FPGA which has distinct advantages compared to other application specific integrated circuit (ASIC) for the purposes of this work. The FPGA provides flexibility and stability which are important factors in the radar application. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FPGA" title="FPGA">FPGA</a>, <a href="https://publications.waset.org/abstracts/search?q=MTI" title=" MTI"> MTI</a>, <a href="https://publications.waset.org/abstracts/search?q=double%20delay%20line%20canceler" title=" double delay line canceler"> double delay line canceler</a>, <a href="https://publications.waset.org/abstracts/search?q=Doppler%20Shift" title=" Doppler Shift "> Doppler Shift </a> </p> <a href="https://publications.waset.org/abstracts/2245/design-and-realization-of-double-delay-line-canceller-ddlc-using-fpga" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2245.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">644</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Weihao%20Wang">Weihao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhulin%20Zong"> Zhulin Zong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Two-dimensional ordered statistical constant false alarm detection is a widely used method for detecting weak target signals in radar signal processing applications. The method is based on analyzing the statistical characteristics of the noise and clutter present in the radar signal and then using this information to set an appropriate detection threshold. In this approach, the reference cell of the unit to be detected is divided into several reference subunits. These subunits are used to estimate the noise level and adjust the detection threshold, with the aim of minimizing the false alarm rate. By using an ordered statistical approach, the method is able to effectively suppress the influence of clutter and noise, resulting in a low false alarm rate. The detection process involves a number of steps, including filtering the input radar signal to remove any noise or clutter, estimating the noise level based on the statistical characteristics of the reference subunits, and finally, setting the detection threshold based on the estimated noise level. One of the main advantages of two-dimensional ordered statistical constant false alarm detection is its ability to detect weak target signals in the presence of strong clutter and noise. This is achieved by carefully analyzing the statistical properties of the signal and using an ordered statistical approach to estimate the noise level and adjust the detection threshold. In conclusion, two-dimensional ordered statistical constant false alarm detection is a powerful technique for detecting weak target signals in radar signal processing applications. By dividing the reference cell into several subunits and using an ordered statistical approach to estimate the noise level and adjust the detection threshold, this method is able to effectively suppress the influence of clutter and noise and maintain a low false alarm rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=two-dimensional" title="two-dimensional">two-dimensional</a>, <a href="https://publications.waset.org/abstracts/search?q=ordered%20statistical" title=" ordered statistical"> ordered statistical</a>, <a href="https://publications.waset.org/abstracts/search?q=constant%20false%20alarm" title=" constant false alarm"> constant false alarm</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=weak%20target%20signals" title=" weak target signals"> weak target signals</a> </p> <a href="https://publications.waset.org/abstracts/163351/an-improved-two-dimensional-ordered-statistical-constant-false-alarm-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163351.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Cognitive SATP for Airborne Radar Based on Slow-Time Coding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fanqiang%20Kong">Fanqiang Kong</a>, <a href="https://publications.waset.org/abstracts/search?q=Jindong%20Zhang"> Jindong Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Daiyin%20Zhu"> Daiyin Zhu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Space-time adaptive processing (STAP) techniques have been motivated as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to STAP technique, and cognitive STAP is discussed. The principle for improving signal-to-clutter ratio (SCNR) based on slow-time coding is given, and the corresponding optimization algorithm based on cyclic and power-like algorithms is presented. Numerical examples show the effectiveness of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space-time%20adaptive%20processing%20%28STAP%29" title="space-time adaptive processing (STAP)">space-time adaptive processing (STAP)</a>, <a href="https://publications.waset.org/abstracts/search?q=airborne%20radar" title=" airborne radar"> airborne radar</a>, <a href="https://publications.waset.org/abstracts/search?q=signal-to-clutter%20ratio" title=" signal-to-clutter ratio"> signal-to-clutter ratio</a>, <a href="https://publications.waset.org/abstracts/search?q=slow-time%20coding" title=" slow-time coding"> slow-time coding</a> </p> <a href="https://publications.waset.org/abstracts/71518/cognitive-satp-for-airborne-radar-based-on-slow-time-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">273</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> Track Initiation Method Based on Multi-Algorithm Fusion Learning of 1DCNN And Bi-LSTM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhe%20Li">Zhe Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Aihua%20Cai"> Aihua Cai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the problem of high-density clutter and interference affecting radar detection target track initiation in ECM and complex radar mission, the traditional radar target track initiation method has been difficult to adapt. To this end, we propose a multi-algorithm fusion learning track initiation algorithm, which transforms the track initiation problem into a true-false track discrimination problem, and designs an algorithm based on 1DCNN(One-Dimensional CNN)combined with Bi-LSTM (Bi-Directional Long Short-Term Memory )for fusion classification. The experimental dataset consists of real trajectories obtained from a certain type of three-coordinate radar measurements, and the experiments are compared with traditional trajectory initiation methods such as rule-based method, logical-based method and Hough-transform-based method. The simulation results show that the overall performance of the multi-algorithm fusion learning track initiation algorithm is significantly better than that of the traditional method, and the real track initiation rate can be effectively improved under high clutter density with the average initiation time similar to the logical method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=track%20initiation" title="track initiation">track initiation</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-algorithm%20fusion" title=" multi-algorithm fusion"> multi-algorithm fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=1DCNN" title=" 1DCNN"> 1DCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=Bi-LSTM" title=" Bi-LSTM"> Bi-LSTM</a> </p> <a href="https://publications.waset.org/abstracts/173764/track-initiation-method-based-on-multi-algorithm-fusion-learning-of-1dcnn-and-bi-lstm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173764.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">24</span> Effective Nutrition Label Use on Smartphones</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vladimir%20Kulyukin">Vladimir Kulyukin</a>, <a href="https://publications.waset.org/abstracts/search?q=Tanwir%20Zaman"> Tanwir Zaman</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarat%20Kiran%20Andhavarapu"> Sarat Kiran Andhavarapu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research on nutrition label use identifies four factors that impede comprehension and retention of nutrition information by consumers: label’s location on the package, presentation of information within the label, label’s surface size, and surrounding visual clutter. In this paper, a system is presented that makes nutrition label use more effective for nutrition information comprehension and retention. The system’s front end is a smartphone application. The system’s back end is a four node Linux cluster for image recognition and data storage. Image frames captured on the smartphone are sent to the back end for skewed or aligned barcode recognition. When barcodes are recognized, corresponding nutrition labels are retrieved from a cloud database and presented to the user on the smartphone’s touchscreen. Each displayed nutrition label is positioned centrally on the touchscreen with no surrounding visual clutter. Wikipedia links to important nutrition terms are embedded to improve comprehension and retention of nutrition information. Standard touch gestures (e.g., zoom in/out) available on mainstream smartphones are used to manipulate the label’s surface size. The nutrition label database currently includes 200,000 nutrition labels compiled from public web sites by a custom crawler. Stress test experiments with the node cluster are presented. Implications for proactive nutrition management and food policy are discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20computing" title="mobile computing">mobile computing</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title=" cloud computing"> cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20label%20use" title=" nutrition label use"> nutrition label use</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20management" title=" nutrition management"> nutrition management</a>, <a href="https://publications.waset.org/abstracts/search?q=barcode%20scanning" title=" barcode scanning "> barcode scanning </a> </p> <a href="https://publications.waset.org/abstracts/6102/effective-nutrition-label-use-on-smartphones" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6102.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> The Principle Probabilities of Space-Distance Resolution for a Monostatic Radar and Realization in Cylindrical Array</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anatoly%20D.%20Pluzhnikov">Anatoly D. Pluzhnikov</a>, <a href="https://publications.waset.org/abstracts/search?q=Elena%20N.%20Pribludova"> Elena N. Pribludova</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20G.%20Ryndyk"> Alexander G. Ryndyk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In conjunction with the problem of the target selection on a clutter background, the analysis of the scanning rate influence on the spatial-temporal signal structure, the generalized multivariate correlation function and the quality of the resolution with the increase pulse repetition frequency is made. The possibility of the object space-distance resolution, which is conditioned by the range-to-angle conversion with an increased scanning rate, is substantiated. The calculations for the real cylindrical array at high scanning rate are presented. The high scanning rate let to get the signal to noise improvement of the order of 10 dB for the space-time signal processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=antenna%20pattern" title="antenna pattern">antenna pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=array" title=" array"> array</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20resolution" title=" spatial resolution"> spatial resolution</a> </p> <a href="https://publications.waset.org/abstracts/98259/the-principle-probabilities-of-space-distance-resolution-for-a-monostatic-radar-and-realization-in-cylindrical-array" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> Joint Path and Push Planning among Moveable Obstacles</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victor%20Emeli">Victor Emeli</a>, <a href="https://publications.waset.org/abstracts/search?q=Akansel%20Cosgun"> Akansel Cosgun</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper explores the navigation among movable obstacles (NAMO) problem and proposes joint path and push planning: which path to take and in what direction the obstacles should be pushed at, given a start and goal position. We present a planning algorithm for selecting a path and the obstacles to be pushed, where a rapidly-exploring random tree (RRT)-based heuristic is employed to calculate a minimal collision path. When it is necessary to apply a pushing force to slide an obstacle out of the way, the planners leverage means-end analysis through a dynamic physics simulation to determine the sequence of linear pushes to clear the necessary space. Simulation experiments show that our approach finds solutions in higher clutter percentages (up to 49%) compared to the straight-line push planner (37%) and RRT without pushing (18%). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=motion%20planning" title="motion planning">motion planning</a>, <a href="https://publications.waset.org/abstracts/search?q=path%20planning" title=" path planning"> path planning</a>, <a href="https://publications.waset.org/abstracts/search?q=push%20planning" title=" push planning"> push planning</a>, <a href="https://publications.waset.org/abstracts/search?q=robot%20navigation" title=" robot navigation"> robot navigation</a> </p> <a href="https://publications.waset.org/abstracts/128403/joint-path-and-push-planning-among-moveable-obstacles" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128403.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">164</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tian%20Xia">Tian Xia</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Yan%20Tang"> Yuan Yan Tang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance target signal and suppress background clutter simultaneously. In the first stage, a enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two changeling image sequences show that the proposed method can detect the bright and dark targets simultaneously, and is not sensitive to sea-sky line of the infrared image. So it is fit for IR small infrared target detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=small%20target%20detection" title="small target detection">small target detection</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20contrast" title=" local contrast"> local contrast</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20vision%20system" title=" human vision system"> human vision system</a>, <a href="https://publications.waset.org/abstracts/search?q=Laplacian%20of%20Gaussian" title=" Laplacian of Gaussian"> Laplacian of Gaussian</a> </p> <a href="https://publications.waset.org/abstracts/19199/biologically-inspired-small-infrared-target-detection-using-local-contrast-mechanisms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19199.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">468</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> An Adaptive CFAR Algorithm Based on Automatic Censoring in Heterogeneous Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naime%20Boudemagh">Naime Boudemagh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we aim to improve the detection performances of radar systems. To this end, we propose and analyze a novel censoring technique of undesirable samples, of priori unknown positions, that may be present in the environment under investigation. Therefore, we consider heterogeneous backgrounds characterized by the presence of some irregularities such that clutter edge transitions and/or interfering targets. The proposed detector, termed automatic censoring constant false alarm (AC-CFAR), operates exclusively in a Gaussian background. It is built to allow the segmentation of the environment to regions and switch automatically to the appropriate detector; namely, the cell averaging CFAR (CA-CFAR), the censored mean level CFAR (CMLD-CFAR) or the order statistic CFAR (OS-CFAR). Monte Carlo simulations show that the AC-CFAR detector performs like the CA-CFAR in a homogeneous background. Moreover, the proposed processor exhibits considerable robustness in a heterogeneous background. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CFAR" title="CFAR">CFAR</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20censoring" title=" automatic censoring"> automatic censoring</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneous%20environments" title=" heterogeneous environments"> heterogeneous environments</a>, <a href="https://publications.waset.org/abstracts/search?q=radar%20systems" title=" radar systems"> radar systems</a> </p> <a href="https://publications.waset.org/abstracts/28302/an-adaptive-cfar-algorithm-based-on-automatic-censoring-in-heterogeneous-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">602</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Analysis of Formation Methods of Range Profiles for an X-Band Coastal Surveillance Radar</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nguyen%20Van%20Loi">Nguyen Van Loi</a>, <a href="https://publications.waset.org/abstracts/search?q=Le%20Thanh%20Son"> Le Thanh Son</a>, <a href="https://publications.waset.org/abstracts/search?q=Tran%20Trung%20Kien"> Tran Trung Kien</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper deals with the problem of the formation of range profiles (RPs) for an X-band coastal surveillance radar. Two popular methods, the difference operator method, and the window-based method, are reviewed and analyzed via two tests with different datasets. The test results show that although the original window-based method achieves a better performance than the difference operator method, it has three main drawbacks that are the use of 3 or 4 peaks of an RP for creating the windows, the extension of the window size using the power sum of three adjacent cells in the left and the right sides of the windows and the same threshold applied for all types of vessels to finish the formation process of RPs. These drawbacks lead to inaccurate RPs due to the low signal-to-clutter ratio. Therefore, some suggestions are proposed to improve the original window-based method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=range%20profile" title="range profile">range profile</a>, <a href="https://publications.waset.org/abstracts/search?q=difference%20operator%20method" title=" difference operator method"> difference operator method</a>, <a href="https://publications.waset.org/abstracts/search?q=window-based%20method" title=" window-based method"> window-based method</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20target%20recognition" title=" automatic target recognition"> automatic target recognition</a> </p> <a href="https://publications.waset.org/abstracts/134878/analysis-of-formation-methods-of-range-profiles-for-an-x-band-coastal-surveillance-radar" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/134878.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> Automatic Censoring in K-Distribution for Multiple Targets Situations </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naime%20Boudemagh">Naime Boudemagh</a>, <a href="https://publications.waset.org/abstracts/search?q=Zoheir%20Hammoudi"> Zoheir Hammoudi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The parameters estimation of the K-distribution is an essential part in radar detection. In fact, presence of interfering targets in reference cells causes a decrease in detection performances. In such situation, the estimate of the shape and the scale parameters are far from the actual values. In the order to avoid interfering targets, we propose an Automatic Censoring (AC) algorithm of radar interfering targets in K-distribution. The censoring technique used in this work offers a good discrimination between homogeneous and non-homogeneous environments. The homogeneous population is then used to estimate the unknown parameters by the classical Method of Moment (MOM). The AC algorithm does not need any prior information about the clutter parameters nor does it require both the number and the position of interfering targets. The accuracy of the estimation parameters obtained by this algorithm are validated and compared to various actual values of the shape parameter, using Monte Carlo simulations, this latter show that the probability of censing in multiple target situations are in good agreement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=parameters%20estimation" title="parameters estimation">parameters estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=method%20of%20moments" title=" method of moments"> method of moments</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20censoring" title=" automatic censoring"> automatic censoring</a>, <a href="https://publications.waset.org/abstracts/search?q=K%20distribution" title=" K distribution "> K distribution </a> </p> <a href="https://publications.waset.org/abstracts/12906/automatic-censoring-in-k-distribution-for-multiple-targets-situations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12906.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">373</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Ship Detection Requirements Analysis for Different Sea States: Validation on Real SAR Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaime%20Mart%C3%ADn-de-Nicol%C3%A1s">Jaime Martín-de-Nicolás</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Mata-Moya"> David Mata-Moya</a>, <a href="https://publications.waset.org/abstracts/search?q=Nerea%20del-Rey-Maestre"> Nerea del-Rey-Maestre</a>, <a href="https://publications.waset.org/abstracts/search?q=Pedro%20G%C3%B3mez-del-Hoyo"> Pedro Gómez-del-Hoyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Mar%C3%ADa-Pilar%20Jarabo-Amores"> María-Pilar Jarabo-Amores</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Ship detection is nowadays quite an important issue in tasks related to sea traffic control, fishery management and ship search and rescue. Although it has traditionally been carried out by patrol ships or aircrafts, coverage and weather conditions and sea state can become a problem. Synthetic aperture radars can surpass these coverage limitations and work under any climatological condition. A fast CFAR ship detector based on a robust statistical modeling of sea clutter with respect to sea states in SAR images is used. In this paper, the minimum SNR required to obtain a given detection probability with a given false alarm rate for any sea state is determined. A Gaussian target model using real SAR data is considered. Results show that SNR does not depend heavily on the class considered. Provided there is some variation in the backscattering of targets in SAR imagery, the detection probability is limited and a post-processing stage based on morphology would be suitable. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SAR" title="SAR">SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20gamma%20distribution" title=" generalized gamma distribution"> generalized gamma distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=detection%20curves" title=" detection curves"> detection curves</a>, <a href="https://publications.waset.org/abstracts/search?q=radar%20detection" title=" radar detection"> radar detection</a> </p> <a href="https://publications.waset.org/abstracts/52278/ship-detection-requirements-analysis-for-different-sea-states-validation-on-real-sar-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52278.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuan%20Zheng">Yuan Zheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called fitness function, to measure the degree of vehicle matching. The existing fitness functions often poorly perform when the clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike the existing evaluation functions, the proposed fitness function is to study the vehicle matching problem from both local and global perspectives, which exploits the pixel gradient information as well as the silhouette information. In view of the discrepancy between 3D vehicle model and real vehicle, a weighting strategy is introduced to differently treat the fitting of the model&rsquo;s wireframes. Additionally, a normalization operation for the model&rsquo;s projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to the cluttered background and partial occlusion. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D-2D%20matching" title="3D-2D matching">3D-2D matching</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20function" title=" fitness function"> fitness function</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20vehicle%20model" title=" 3D vehicle model"> 3D vehicle model</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20image%20gradient" title=" local image gradient"> local image gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=silhouette%20information" title=" silhouette information"> silhouette information</a> </p> <a href="https://publications.waset.org/abstracts/45357/a-practical-and-efficient-evaluation-function-for-3d-model-based-vehicle-matching" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45357.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Recognition of Objects in a Maritime Environment Using a Combination of Pre- and Post-Processing of the Polynomial Fit Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20R.%20Hordijk">R. R. Hordijk</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20J.%20G.%20Somsen"> O. J. G. Somsen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditionally, radar systems are the eyes and ears of a ship. However, these systems have their drawbacks and nowadays they are extended with systems that work with video and photos. Processing of data from these videos and photos is however very labour-intensive and efforts are being made to automate this process. A major problem when trying to recognize objects in water is that the 'background' is not homogeneous so that traditional image recognition technics do not work well. Main question is, can a method be developed which automate this recognition process. There are a large number of parameters involved to facilitate the identification of objects on such images. One is varying the resolution. In this research, the resolution of some images has been reduced to the extreme value of 1% of the original to reduce clutter before the polynomial fit (pre-processing). It turned out that the searched object was clearly recognizable as its grey value was well above the average. Another approach is to take two images of the same scene shortly after each other and compare the result. Because the water (waves) fluctuates much faster than an object floating in the water one can expect that the object is the only stable item in the two images. Both these methods (pre-processing and comparing two images of the same scene) delivered useful results. Though it is too early to conclude that with these methods all image problems can be solved they are certainly worthwhile for further research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title="image processing">image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=polynomial%20fit" title=" polynomial fit"> polynomial fit</a>, <a href="https://publications.waset.org/abstracts/search?q=water" title=" water"> water</a> </p> <a href="https://publications.waset.org/abstracts/34331/recognition-of-objects-in-a-maritime-environment-using-a-combination-of-pre-and-post-processing-of-the-polynomial-fit-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34331.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">534</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> An Optimal Matching Design Method of Space-Based Optical Payload for Typical Aerial Target Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yin%20Zhang">Yin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Qiao"> Kai Qiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiyang%20Zhi"> Xiyang Zhi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinnan%20Gong"> Jinnan Gong</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianming%20Hu"> Jianming Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to effectively detect aerial targets over long distances, an optimal matching design method of space-based optical payload is proposed. Firstly, main factors affecting optical detectability of small targets under complex environment are analyzed based on the full link of a detection system, including band center, band width and spatial resolution. Then a performance characterization model representing the relationship between image signal-to-noise ratio (SCR) and the above influencing factors is established to describe a detection system. Finally, an optimal matching design example is demonstrated for a typical aerial target by simulating and analyzing its SCR under different scene clutter coupling with multi-scale characteristics, and the optimized detection band and spatial resolution are presented. The method can provide theoretical basis and scientific guidance for space-based detection system design, payload specification demonstration and information processing algorithm optimization. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space-based%20detection" title="space-based detection">space-based detection</a>, <a href="https://publications.waset.org/abstracts/search?q=aerial%20targets" title=" aerial targets"> aerial targets</a>, <a href="https://publications.waset.org/abstracts/search?q=optical%20system%20design" title=" optical system design"> optical system design</a>, <a href="https://publications.waset.org/abstracts/search?q=detectability%20characterization" title=" detectability characterization"> detectability characterization</a> </p> <a href="https://publications.waset.org/abstracts/107378/an-optimal-matching-design-method-of-space-based-optical-payload-for-typical-aerial-target-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107378.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Detectability Analysis of Typical Aerial Targets from Space-Based Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yin%20Zhang">Yin Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Qiao"> Kai Qiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiyang%20Zhi"> Xiyang Zhi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jinnan%20Gong"> Jinnan Gong</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianming%20Hu"> Jianming Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to achieve effective detection of aerial targets over long distances from space-based platforms, the mechanism of interaction between the radiation characteristics of the aerial targets and the complex scene environment including the sunlight conditions, underlying surfaces and the atmosphere are analyzed. A large simulated database of space-based radiance images is constructed considering several typical aerial targets, target working modes (flight velocity and altitude), illumination and observation angles, background types (cloud, ocean, and urban areas) and sensor spectrums ranging from visible to thermal infrared. The target detectability is characterized by the signal-to-clutter ratio (SCR) extracted from the images. The influence laws of the target detectability are discussed under different detection bands and instantaneous fields of view (IFOV). Furthermore, the optimal center wavelengths and widths of the detection bands are suggested, and the minimum IFOV requirements are proposed. The research can provide theoretical support and scientific guidance for the design of space-based detection systems and on-board information processing algorithms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space-based%20detection" title="space-based detection">space-based detection</a>, <a href="https://publications.waset.org/abstracts/search?q=aerial%20targets" title=" aerial targets"> aerial targets</a>, <a href="https://publications.waset.org/abstracts/search?q=detectability%20analysis" title=" detectability analysis"> detectability analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=scene%20environment" title=" scene environment"> scene environment</a> </p> <a href="https://publications.waset.org/abstracts/97443/detectability-analysis-of-typical-aerial-targets-from-space-based-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97443.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carolina%20Gouveia">Carolina Gouveia</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Vieira"> José Vieira</a>, <a href="https://publications.waset.org/abstracts/search?q=Pedro%20Pinho"> Pedro Pinho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The cardiopulmonary signal monitoring, without the usage of contact electrodes or any type of in-body sensors, has several applications such as sleeping monitoring and continuous monitoring of vital signals in bedridden patients. This system has also applications in the vehicular environment to monitor the driver, in order to avoid any possible accident in case of cardiac failure. Thus, the bio-radar system proposed in this paper, can measure vital signals accurately by using the Doppler effect principle that relates the received signal properties with the distance change between the radar antennas and the person&rsquo;s chest-wall. Once the bio-radar aim is to monitor subjects in real-time and during long periods of time, it is impossible to guarantee the patient immobilization, hence their random motion will interfere in the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The used algorithm for breath rate extraction is explained and a method for DC offsets removal based in a motion detection system is proposed. Furthermore, experimental tests were conducted with a view to prove that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bio-signals" title="bio-signals">bio-signals</a>, <a href="https://publications.waset.org/abstracts/search?q=DC%20component" title=" DC component"> DC component</a>, <a href="https://publications.waset.org/abstracts/search?q=Doppler%20effect" title=" Doppler effect"> Doppler effect</a>, <a href="https://publications.waset.org/abstracts/search?q=ellipse%20fitting" title=" ellipse fitting"> ellipse fitting</a>, <a href="https://publications.waset.org/abstracts/search?q=radar" title=" radar"> radar</a>, <a href="https://publications.waset.org/abstracts/search?q=SDR" title=" SDR"> SDR</a> </p> <a href="https://publications.waset.org/abstracts/95280/motion-detection-method-for-clutter-rejection-in-the-bio-radar-signal-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95280.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Detection of Micro-Unmanned Ariel Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tareq%20AlNuaim">Tareq AlNuaim</a>, <a href="https://publications.waset.org/abstracts/search?q=Mubashir%20Alam"> Mubashir Alam</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulrazaq%20Aldowesh"> Abdulrazaq Aldowesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The usage of micro-Unmanned Ariel Vehicles (UAVs) has witnessed an enormous increase recently. Detection of such drones became a necessity nowadays to prevent any harmful activities. Typically, such targets have low velocity and low Radar Cross Section (RCS), making them indistinguishable from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) Radars have many potentials; it increases the degrees of freedom on both transmit and receive ends. Such architecture allows for flexibility in operation, through utilizing the direct access to every element in the transmit/ receive array. MIMO systems allow for several array processing techniques, permitting the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving low radar cross section micro-UAV used by hobbyists. Radar cross section simulations were carried out using FEKO simulator, achieving an average of -14.42 dBsm at S-band. 
The developed prototype was experimentally evaluated, achieving a detection range of more than 300 meters for a DJI Mavic Pro drone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20beamforming" title="digital beamforming">digital beamforming</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20detection" title=" drone detection"> drone detection</a>, <a href="https://publications.waset.org/abstracts/search?q=micro-UAV" title=" micro-UAV"> micro-UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO" title=" MIMO"> MIMO</a>, <a href="https://publications.waset.org/abstracts/search?q=phased%20array" title=" phased array"> phased array</a> </p> <a href="https://publications.waset.org/abstracts/107642/detection-of-micro-unmanned-ariel-vehicles-using-a-multiple-input-multiple-output-digital-array-radar" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107642.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Quantitative Analysis of the Quality of Housing and Land Use in the Built-up area of Croatian Coastal City of Zadar</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Silvija%20%C5%A0iljeg">Silvija Šiljeg</a>, <a href="https://publications.waset.org/abstracts/search?q=Ante%20%C5%A0iljeg"> Ante Šiljeg</a>, <a href="https://publications.waset.org/abstracts/search?q=Branko%20Cavri%C4%87"> Branko Cavrić</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Housing is considered a basic human need and an important component of the quality of life (QoL) in urban areas worldwide. In contemporary housing studies, the concept of the quality of housing (QoH) is considered a multi-dimensional and multi-disciplinary field. It emphasizes the connection between various aspects of the QoL, which can be measured by quantitative and qualitative indicators at different spatial levels (e.g., local, city, metropolitan, regional). The main goal of this paper is to examine the QoH and compare the results of the quantitative analysis with the clutter land-use categories derived for selected local communities in the Croatian coastal city of Zadar. The qualitative housing analysis, based on four housing indicators (out of 24 QoL indicators in total), identified the three Zadar local communities with the highest estimated QoH ranking. Furthermore, using GIS overlay techniques, the QoH was merged with the urban environment analysis and with spatial metrics based on three categories: the element, the class, and the environment as a whole. In terms of semantic-content analysis, the research has also generated a set of indexes suitable for evaluating the “housing state of affairs” and for future decision-making aimed at improving the QoH in the selected local communities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=housing" title="housing">housing</a>, <a href="https://publications.waset.org/abstracts/search?q=quality" title=" quality"> quality</a>, <a href="https://publications.waset.org/abstracts/search?q=indicators" title=" indicators"> indicators</a>, <a href="https://publications.waset.org/abstracts/search?q=indexes" title=" indexes"> indexes</a>, <a href="https://publications.waset.org/abstracts/search?q=urban%20environment" title=" urban environment"> urban environment</a>, <a href="https://publications.waset.org/abstracts/search?q=GIS" title=" GIS"> GIS</a>, <a href="https://publications.waset.org/abstracts/search?q=element" title=" element"> element</a>, <a href="https://publications.waset.org/abstracts/search?q=class" title=" class"> class</a> </p> <a href="https://publications.waset.org/abstracts/3647/quantitative-analysis-of-the-quality-of-housing-and-land-use-in-the-built-up-area-of-croatian-coastal-city-of-zadar" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3647.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Izadkhah">Mohammad Izadkhah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mojtaba%20Hoseini"> Mojtaba Hoseini</a>, <a href="https://publications.waset.org/abstracts/search?q=Alireza%20Khalili%20Tehrani"> Alireza Khalili Tehrani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we presented a new method for tracking flying targets in color video sequences based on contour and kernel. The aim of this work is to overcome the problem of losing target in changing light, large displacement, changing speed, and occlusion. The proposed method is made in three steps, estimate the target location by particle filter, segmentation target region using neural network and find the exact contours by greedy snake algorithm. In the proposed method we have used both region and contour information to create target candidate model and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, target region given to a perceptron neural network to separate the target from background. Then its output used for exact calculation of size and center of the target. Also it is used as the initial contour for the greedy snake algorithm to find the exact target&#39;s edge. The proposed algorithm has been tested on a database which contains a lot of challenges such as high speed and agility of aircrafts, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of neural network increases the accuracy of tracking and segmentation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20tracking" title="video tracking">video tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy%20snake" title=" greedy snake"> greedy snake</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a> </p> <a href="https://publications.waset.org/abstracts/11913/particle-filter-supported-with-the-neural-network-for-aircraft-tracking-based-on-kernel-and-active-contour" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> YOLO-IR: Infrared Small Object Detection in High Noise Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yufeng%20Li">Yufeng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yinan%20Ma"> Yinan Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Jing%20Wu"> Jing Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chengnian%20Long"> Chengnian Long</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Infrared object detection aims at separating small and dim target from clutter background and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications such as improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to the noise of the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve the robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model’s ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve the detection accuracy. In addition, because the generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-art model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=infrared%20small%20target%20detection" title="infrared small target detection">infrared small target detection</a>, <a href="https://publications.waset.org/abstracts/search?q=high%20noise" title=" high noise"> high noise</a>, <a href="https://publications.waset.org/abstracts/search?q=robustness" title=" robustness"> robustness</a>, <a href="https://publications.waset.org/abstracts/search?q=soft-threshold%20coordinate%20attention" title=" soft-threshold coordinate attention"> soft-threshold coordinate attention</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a> </p> <a href="https://publications.waset.org/abstracts/180574/yolo-ir-infrared-small-object-detection-in-high-noise-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/180574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamidullah%20Binol">Hamidullah Binol</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdullah%20Bal"> Abdullah Bal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for detecting the safety situation of goods. Hyperspectral Imaging (HSI) is an attractive material for researchers to inspect food quality and safety estimation such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection of apples, quality analysis and grading of citrus fruits, bruise detection of strawberry, visualization of sugar distribution of melons, measuring ripening of tomatoes, defect detection of pickling cucumber, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. This technique yields with exceptional detection skills, which otherwise cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT which is the nonlinear version of FKT is one of the most effective techniques for solving problems involving two-pattern nature. The conventional FKT method has been improved with kernel machines for increasing the nonlinear discrimination ability and capturing higher order of statistics of data. The proposed approach in this paper aims to segment the fat content of the ground meat by regarding the fat as target class which is tried to be separated from the remaining classes (as clutter). We have applied the KFKT on visible and nearinfrared (VNIR) hyperspectral images of ground meat to determine fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for fat ratio in ground meat. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=food%20%28ground%20meat%29%20inspection" title="food (ground meat) inspection">food (ground meat) inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=Fukunaga-Koontz%20transform" title=" Fukunaga-Koontz transform"> Fukunaga-Koontz transform</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperspectral%20imaging" title=" hyperspectral imaging"> hyperspectral imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20methods" title=" kernel methods"> kernel methods</a> </p> <a href="https://publications.waset.org/abstracts/35980/hyperspectral-imaging-and-nonlinear-fukunaga-koontz-transform-based-food-inspection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Consumers Perception of Slogans/ Taglines: A Study of Higher Education Sector in India</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Puja%20Mahesh">Puja Mahesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: A good slogan captures the essence of your brand's promised consumer benefit in one short phrase. A good slogan conjures up positive imagery about your business or your product. A good slogan has the element of immediacy. Immediacy does not necessarily mean that the slogan will inspire consumers to run right out and buy your product. It does mean, however, that your slogan has an immediate cognitive impact. It forces your audience to "stop-and-think" after exposure as a necessary first step toward remembering your slogan promise. A good slogan is memorable and durability. When your slogan promise is occupying prime real estate in the consumer's subconscious, it aids in recall and activates preference for your brand when you want it -when consumers are ready to buy. The objective of current study is to understand the consumer perception of slogans/taglines of higher education sector in India. Design/Methodology/Approach: Survey of 500 consumers (largely comprising of youth) will be done using questionnaire. Universities and institutes will be chosen on the basis of various streams and Credible Rankings. The perception will be taken from the respondents on the basis of scale. Findings: Catchy phrases, rhymes, music, jingles, avatars (visual representations) and unique imagery are just a few of the mnemonic clutter-busting tactics commonly used in slogans to stand apart from the competition and to aid in memory recall. The study will reveal whether it is true that catchy phrases, rhymes, music, jingles, avatars (visual representations) and unique imagery across disciplines and universities help in building stronger brands. It will also be found whether consumers pay more attention to reputation of University/ College or brand identity. Originality/Value: Researcher has not come across any study of Consumer Perception of Slogans/Taglines of Higher Education Brands in India. 
It would also be interesting to understand consumer perception of various colleges/streams, particularly management colleges, which invest a lot of time in branding exercises. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=consumer%20perception" title="consumer perception">consumer perception</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=slogans" title=" slogans"> slogans</a>, <a href="https://publications.waset.org/abstracts/search?q=taglines" title=" taglines"> taglines</a> </p> <a href="https://publications.waset.org/abstracts/23549/consumers-perception-of-slogans-taglines-a-study-of-higher-education-sector-in-india" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">424</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Executive Deficits in Non-Clinical Hoarders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Heffernan">Thomas Heffernan</a>, <a href="https://publications.waset.org/abstracts/search?q=Nick%20Neave"> Nick Neave</a>, <a href="https://publications.waset.org/abstracts/search?q=Colin%20%20Hamilton"> Colin Hamilton</a>, <a href="https://publications.waset.org/abstracts/search?q=Gill%20Case"> Gill Case</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hoarding is the acquisition of and failure to discard possessions, leading to excessive clutter and significant psychological/emotional distress. From a cognitive-behavioural approach, excessive hoarding arises from information-processing deficits, as well as from problems with emotional attachment to possessions and beliefs about the nature of possessions. In terms of information processing, hoarders have shown deficits in executive functions, including working memory, planning, inhibitory control, and cognitive flexibility. However, this previous research is often confounded by co-morbid factors such as anxiety, depression, or obsessive-compulsive disorder. The current study adopted a cognitive-behavioural approach, specifically assessing executive deficits and working memory in a non-clinical sample of hoarders, compared with non-hoarders. In this study, a non-clinical sample of 40 hoarders and 73 non-hoarders (defined by The Savings Inventory-Revised) completed the Adult Executive Functioning Inventory, which measures working memory and inhibition; the Dysexecutive Questionnaire-Revised, which measures general executive function; and the Hospital Anxiety and Depression Scale, which measures mood. The participant sample was made up of unpaid young adult volunteers who were undergraduate students and who completed the questionnaires on a university campus. The results revealed no differences between hoarders and non-hoarders in age, sex, or mood, while hoarders reported significantly more deficits in inhibitory control and general executive function than non-hoarders. There was no between-group difference in general working memory. 
This suggests that non-clinical hoarders have a specific difficulty with inhibition-control, which enables you to resist repeated, unwanted urges. This might explain the hoarder’s inability to resist urges to buy and keep items that are no longer of any practical use. These deficits may be underpinned by general executive function deficiencies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hoarding" title="hoarding">hoarding</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=executive" title=" executive"> executive</a>, <a href="https://publications.waset.org/abstracts/search?q=deficits" title=" deficits"> deficits</a> </p> <a href="https://publications.waset.org/abstracts/133654/executive-deficits-in-non-clinical-hoarders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">193</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=non-Gaussian%20clutter&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=non-Gaussian%20clutter&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
