
Search results for: statistical features

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="statistical features"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 7618</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: statistical features</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7618</span> Development of Sleep Quality Index Using Heart Rate</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dongjoo%20Kim">Dongjoo Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Sik%20Son"> Chang-Sik Son</a>, <a href="https://publications.waset.org/abstracts/search?q=Won-Seok%20Kang"> Won-Seok Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Adequate sleep affects various parts of one&rsquo;s overall physical and mental life. As one of the methods in determining the appropriate amount of sleep, this research presents a heart rate based sleep quality index. In order to evaluate sleep quality using the heart rate, sleep data from 280 subjects taken over one month are used. Their sleep data are categorized by a three-part heart rate range. After categorizing, some features are extracted, and the statistical significances are verified for these features. The results show that some features of this sleep quality index model have statistical significance. Thus, this heart rate based sleep quality index may be a useful discriminator of sleep. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sleep" title="sleep">sleep</a>, <a href="https://publications.waset.org/abstracts/search?q=sleep%20quality" title=" sleep quality"> sleep quality</a>, <a href="https://publications.waset.org/abstracts/search?q=heart%20rate" title=" heart rate"> heart rate</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a> </p> <a href="https://publications.waset.org/abstracts/52817/development-of-sleep-quality-index-using-heart-rate" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52817.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7617</span> Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haiyan%20Wu">Haiyan Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20Liu"> Ying Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaoyun%20Shi"> Shaoyun Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Authorship attribution is to extract features to identify authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length), content features (e.g., frequent words, n-grams). Modeling these features by regression or some transparent machine learning methods gives a portrait of the authors' writing style. But these methods do not capture the syntactic (e.g., dependency relationship) or semantic (e.g., topics) information. In recent years, some researchers model syntactic trees or latent semantic information by neural networks. However, few works take them together. Besides, predictions by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize the statistical style and content features but also take advantage of both syntactic and semantic features. Different from an end-to-end neural model, feature selection and prediction are two steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give prediction and understandable representation of writing style. Experiments show that our extracted features can improve the state-of-the-art methods on three benchmark datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=authorship%20attribution" title="authorship attribution">authorship attribution</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=syntactic%20feature" title=" syntactic feature"> syntactic feature</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/129270/exploring-syntactic-and-semantic-features-for-text-based-authorship-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7616</span> Monitoring Blood Pressure Using Regression Techniques </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qasem%20Qananwah">Qasem Qananwah</a>, <a href="https://publications.waset.org/abstracts/search?q=Ahmad%20Dagamseh"> Ahmad Dagamseh</a>, <a href="https://publications.waset.org/abstracts/search?q=Hiam%20AlQuran"> Hiam AlQuran</a>, <a href="https://publications.waset.org/abstracts/search?q=Khalid%20Shaker%20Ibrahim"> Khalid Shaker Ibrahim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Blood pressure helps the physicians greatly to have a deep insight into the cardiovascular system. The determination of individual blood pressure is a standard clinical procedure considered for cardiovascular system problems. The conventional techniques to measure blood pressure (e.g. cuff method) allows a limited number of readings for a certain period (e.g. every 5-10 minutes). Additionally, these systems cause turbulence to blood flow; impeding continuous blood pressure monitoring, especially in emergency cases or critically ill persons. In this paper, the most important statistical features in the photoplethysmogram (PPG) signals were extracted to estimate the blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features that have the highest correlation with blood pressure. The results show that the stiffness index means and standard deviation for the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting is used to best fit the series of data and the results show that the error value in estimating the SBP is 4.95% and in estimating the DBP is 3.99%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blood%20pressure" title="blood pressure">blood pressure</a>, <a href="https://publications.waset.org/abstracts/search?q=noninvasive%20optical%20system" title=" noninvasive optical system"> noninvasive optical system</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=continuous%20monitoring" title=" continuous monitoring"> continuous monitoring</a> </p> <a href="https://publications.waset.org/abstracts/114949/monitoring-blood-pressure-using-regression-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114949.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7615</span> An Approach Based on Statistics and Multi-Resolution Representation to Classify Mammograms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nebi%20Gedik">Nebi Gedik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the significant and continual public health problems in the world is breast cancer. Early detection is very important to fight the disease, and mammography has been one of the most common and reliable methods to detect the disease in the early stages. However, it is a difficult task, and computer-aided diagnosis (CAD) systems are needed to assist radiologists in providing both accurate and uniform evaluation for mass in mammograms. In this study, a multiresolution statistical method to classify mammograms as normal and abnormal in digitized mammograms is used to construct a CAD system. The mammogram images are represented by wave atom transform, and this representation is made by certain groups of coefficients, independently. The CAD system is designed by calculating some statistical features using each group of coefficients. The classification is performed by using support vector machine (SVM). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wave%20atom%20transform" title="wave atom transform">wave atom transform</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20features" title=" statistical features"> statistical features</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-resolution%20representation" title=" multi-resolution representation"> multi-resolution representation</a>, <a href="https://publications.waset.org/abstracts/search?q=mammogram" title=" mammogram"> mammogram</a> </p> <a href="https://publications.waset.org/abstracts/62356/an-approach-based-on-statistics-and-multi-resolution-representation-to-classify-mammograms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62356.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">222</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7614</span> Statistical Feature Extraction Method for Wood Species Recognition System </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Iz%27aan%20Paiz%20Bin%20Zamri">Mohd Iz&#039;aan Paiz Bin Zamri</a>, <a href="https://publications.waset.org/abstracts/search?q=Anis%20Salwa%20Mohd%20Khairuddin"> Anis Salwa Mohd Khairuddin</a>, <a href="https://publications.waset.org/abstracts/search?q=Norrima%20Mokhtar"> Norrima Mokhtar</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubiyah%20Yusof"> Rubiyah Yusof</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at custom checkpoints to avoid mislabeling of timber which will results to loss of income to the timber industry. The system focuses on analyzing the statistical pores properties of the wood images. This paper proposed a fuzzy-based feature extractor which mimics the experts&rsquo; knowledge on wood texture to extract the properties of pores distribution from the wood surface texture. The proposed feature extractor consists of two steps namely pores extraction and fuzzy pores management. The total number of statistical features extracted from each wood image is 38 features. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts&rsquo; interpretation on wood texture which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification" title="classification">classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy" title=" fuzzy"> fuzzy</a>, <a href="https://publications.waset.org/abstracts/search?q=inspection%20system" title=" inspection system"> inspection system</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20analysis" title=" image analysis"> image analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=macroscopic%20images" title=" macroscopic images"> macroscopic images</a> </p> <a href="https://publications.waset.org/abstracts/36415/statistical-feature-extraction-method-for-wood-species-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36415.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7613</span> A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hee-Jeong%20Ahn">Hee-Jeong Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kee-Won%20Kim"> Kee-Won Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung-Hoon%20Kim"> Seung-Hoon Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The knowledge of the relationship between characters can help readers to understand the overall story or plot of the literary fiction. In this paper, we present a method for extracting the specific relationship between characters from a Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters without considering Korean linguistic features. Furthermore, it is difficult to extract the relationship with direction from text, such as one-sided love, because they consider only the weight of relationship, without considering the direction of the relationship. Therefore, in order to identify specific relationships between characters, we propose a statistical method considering linguistic features, such as syntactic patterns and speech verbs in Korean. The result of our method is represented by a weighted directed graph of the relationship between the characters. Furthermore, we expect that proposed method could be applied to the relationship analysis between characters of other content like movie or TV drama. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=Korean%20linguistic%20feature" title=" Korean linguistic feature"> Korean linguistic feature</a>, <a href="https://publications.waset.org/abstracts/search?q=literary%20fiction" title=" literary fiction"> literary fiction</a>, <a href="https://publications.waset.org/abstracts/search?q=relationship%20extraction" title=" relationship extraction"> relationship extraction</a> </p> <a href="https://publications.waset.org/abstracts/47126/a-relationship-extraction-method-from-literary-fiction-considering-korean-linguistic-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7612</span> Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhongmin%20Wang">Zhongmin Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wudong%20Fan"> Wudong Fan</a>, <a href="https://publications.waset.org/abstracts/search?q=Hengshan%20Zhang"> Hengshan Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yimin%20Zhou"> Yimin Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In data-driven prognostic methods, the prediction accuracy of the estimation for remaining useful life of bearings mainly depends on the performance of health indicators, which are usually fused some statistical features extracted from vibrating signals. However, the existing health indicators have the following two drawbacks: (1) The differnet ranges of the statistical features have the different contributions to construct the health indicators, the expert knowledge is required to extract the features. (2) When convolutional neural networks are utilized to tackle time-frequency features of signals, the time-series of signals are not considered. To overcome these drawbacks, in this study, the method combining convolutional neural network with gated recurrent unit is proposed to extract the time-frequency image features. The extracted features are utilized to construct health indicator and predict remaining useful life of bearings. First, original signals are converted into time-frequency images by using continuous wavelet transform so as to form the original feature sets. Second, with convolutional and pooling layers of convolutional neural networks, the most sensitive features of time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results state that the proposed method shows the enhance performance than the related studies which have used the same bearing dataset provided by PRONOSTIA. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=continuous%20wavelet%20transform" title="continuous wavelet transform">continuous wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20net-work" title=" convolution neural net-work"> convolution neural net-work</a>, <a href="https://publications.waset.org/abstracts/search?q=gated%20recurrent%20unit" title=" gated recurrent unit"> gated recurrent unit</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20indicators" title=" health indicators"> health indicators</a>, <a href="https://publications.waset.org/abstracts/search?q=remaining%20useful%20life" title=" remaining useful life"> remaining useful life</a> </p> <a href="https://publications.waset.org/abstracts/108324/remaining-useful-life-estimation-of-bearings-based-on-nonlinear-dimensional-reduction-combined-with-timing-signals" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">133</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7611</span> A New Internal Architecture Based On Feature Selection for Holonic Manufacturing System </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jihan%20Abdulazeez%20%20Ahmed">Jihan Abdulazeez Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Adnan%20Mohsin%20Abdulazeez%20Brifcani"> Adnan Mohsin Abdulazeez Brifcani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper suggests a new internal architecture of holon based on feature selection model using the combination of Bees Algorithm (BA) and Artificial Neural Network (ANN). BA is used to generate features while ANN is used as a classifier to evaluate the produced features. Proposed system is applied on the Wine data set, the statistical result proves that the proposed system is effective and has the ability to choose informative features with high accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20network" title="artificial neural network">artificial neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=bees%20algorithm" title=" bees algorithm"> bees algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=Holon" title=" Holon"> Holon</a> </p> <a href="https://publications.waset.org/abstracts/33121/a-new-internal-architecture-based-on-feature-selection-for-holonic-manufacturing-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33121.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7610</span> Lambda-Levelwise Statistical Convergence of a Sequence of Fuzzy Numbers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=F.%20Berna%20Benli">F. Berna Benli</a>, <a href="https://publications.waset.org/abstracts/search?q=%C3%96zg%C3%BCr%20Keskin"> Özgür Keskin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lately, many mathematicians have been studied the statistical convergence of a sequence of fuzzy numbers. We know that Lambda-statistically convergence is a kind of convergence between ordinary convergence and statistical convergence. In this paper, we will introduce the new kind of convergence such as λ-levelwise statistical convergence. Then, we will define the concept of the λ-levelwise statistical cluster and limit points of a sequence of fuzzy numbers. Also, we will discuss the relations between the sets of λ-levelwise statistical cluster points and λ-levelwise statistical limit points of sequences of fuzzy numbers. This work has been extended in this paper, where some relations have been considered such that when lambda-statistical limit inferior and lambda-statistical limit superior for lambda-statistically convergent sequences of fuzzy numbers are equal. Furthermore, lambda-statistical boundedness condition for different sequences of fuzzy numbers has been studied. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20number" title="fuzzy number">fuzzy number</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-levelwise%20statistical%20cluster%20points" title=" λ-levelwise statistical cluster points"> λ-levelwise statistical cluster points</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-levelwise%20statistical%20convergence" title=" λ-levelwise statistical convergence"> λ-levelwise statistical convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-levelwise%20statistical%20limit%20points" title=" λ-levelwise statistical limit points"> λ-levelwise statistical limit points</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-statistical%20cluster%20points" title=" λ-statistical cluster points"> λ-statistical cluster points</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-statistical%20convergence" title=" λ-statistical convergence"> λ-statistical convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=%CE%BB-statistical%20limit%20%20points" title=" λ-statistical limit points"> λ-statistical limit points</a> </p> <a href="https://publications.waset.org/abstracts/20755/lambda-levelwise-statistical-convergence-of-a-sequence-of-fuzzy-numbers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20755.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">477</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7609</span> Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gong%20Zhilin">Gong Zhilin</a>, <a href="https://publications.waset.org/abstracts/search?q=Jing%20Yang"> Jing Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jian%20Yin"> Jian Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases are pre-processing, imbalance-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) is pre-processed to enhance the quality of the data through alleviation of the missing data, noisy data as well as null values. The pre-processed data are class imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the balanced class data, the most relevant features like improved Principal Component Analysis (PCA), statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is the conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models like Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and optimized Quantum Deep Neural Network (QDNN). 
The LSTM and CNN are trained with the selected optimal features, and their outputs are fed into the optimized QDNN, which provides the final detection outcome. Since the QDNN is the final detector, its weight function is fine-tuned with the SI-AOA.
Keywords: credit card, data mining, fraud detection, money transactions
Procedia: https://publications.waset.org/abstracts/147387/credit-card-fraud-detection-with-ensemble-model-a-meta-heuristic-approach | PDF: https://publications.waset.org/abstracts/147387.pdf | Downloads: 131

[7608] Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract: In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of and controversy over the definition of music genres across nations and cultures, automatic genre classification systems that facilitate music categorization have been developed. Manual genre selection by music producers provides statistical data for designing such systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal are captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal are summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used.
However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing, and 10-fold cross-validation was used to increase the reliability of the experiments. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical and modulation spectrum features), the NMF features increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, while the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
Procedia: https://publications.waset.org/abstracts/89349/music-genre-classification-based-on-non-negative-matrix-factorization-features | PDF: https://publications.waset.org/abstracts/89349.pdf | Downloads: 303

[7607] Towards Integrating Statistical Color Features for Human Skin Detection
Authors: Mohd Zamri Osman, Mohd Aizaini Maarof, Mohd Foad Rohani
Abstract: Human skin detection is recognized as the primary step in most applications such as
face detection, illicit image filtering, hand recognition, and video surveillance. The performance of any skin detection application relies greatly on two components: feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose color distribution matches that of skin, and pixel-based color features do not eliminate skin-like colors because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color analysis, such as the mean and standard deviation, is exploited as an additional feature to increase the reliability of the skin detector. In this paper, we study the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzes integrated color and texture features using eight classifiers and three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieved significant performance, with an F1-score of 0.969.
Keywords: color space, neural network, random forest, skin detection, statistical feature
Procedia: https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection | PDF: https://publications.waset.org/abstracts/43485.pdf | Downloads: 462

[7606] Pantograph-Catenary Contact Force: Features Evaluation for Catenary Diagnostics
Authors: Mehdi Brahimi, Kamal Medjaher, Noureddine Zerhouni, Mohammed Leouatni
Abstract: Prognostics and Health Management (PHM) is a systems engineering discipline that provides solutions and models for implementing predictive maintenance. The approach is based on extracting useful information from monitoring data to assess the "health" state of industrial equipment or assets. In this paper, we examine multiple features extracted from the pantograph-catenary contact force in order to select the most relevant ones for a diagnostics function.
The feature extraction methodology is based on both measurement data and simulation data generated with a pantograph-catenary simulation software called INPAC. Feature extraction relies on both statistical and signal-processing analyses, while feature selection is based on statistical criteria.
Keywords: catenary/pantograph interaction, diagnostics, Prognostics and Health Management (PHM), quality of current collection
Procedia: https://publications.waset.org/abstracts/63877/pantograph-catenary-contact-force-features-evaluation-for-catenary-diagnostics | PDF: https://publications.waset.org/abstracts/63877.pdf | Downloads: 290

[7605] Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement
Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao
Abstract: Surface roughness is an important index for evaluating surface quality and needs to be measured accurately to ensure workpiece performance. Roughness measurement based on machine vision involves various image features, some of which are redundant, and these redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis to select appropriate features, but such analysis treats features independently and cannot fully utilize the information in the data; moreover, blindly reducing features discards useful information and leads to unreliable results. The focus of this paper is therefore a redundant-feature removal approach for visual roughness measurement. Statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to extract texture features of machined images effectively. Then, principal component analysis (PCA) is used to fuse all extracted features into a new representation, which reduces the feature dimension while maintaining the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM).
The experimental results show that the approach effectively resolves multi-feature information redundancy in machined surface images and provides a new way to evaluate surface roughness visually.
Keywords: feature analysis, machine vision, PCA, surface roughness, SVM
Procedia: https://publications.waset.org/abstracts/138525/image-multi-feature-analysis-by-principal-component-analysis-for-visual-surface-roughness-measurement | PDF: https://publications.waset.org/abstracts/138525.pdf | Downloads: 212

[7604] A Cross-Gender Statistical Analysis of Tuvinian Intonation Features in Comparison With Uzbek and Azerbaijani
Authors: Daria Beziakina, Elena Bulgakova
Abstract: The paper presents a cross-gender and cross-linguistic comparison of the pitch characteristics of Tuvinian with two other Turkic languages, Uzbek and Azerbaijani, based on a statistical analysis of pitch parameter values and intonation patterns used by male and female speakers. The main goal of the work is to obtain the ranges of pitch parameter values typical of Tuvinian speakers for the purpose of automatic language identification. We also propose a cross-gender analysis of declarative intonation in the poorly studied Tuvinian language. The ranges of pitch parameter values were obtained by means of specially developed software that analyzes the distribution of pitch values and yields language-specific statistical pitch intervals.
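Computing such language- and gender-specific pitch intervals from per-utterance F0 values can be sketched in a few lines of numpy; the percentile-based interval and the placeholder values below are editorial assumptions rather than the software's actual output.

```python
import numpy as np

# f0_values[(language, gender)] -> per-utterance pitch values in Hz (placeholders)
f0_values = {
    ("Tuvinian", "female"): np.random.default_rng(0).normal(210, 25, 200),
    ("Tuvinian", "male"): np.random.default_rng(1).normal(120, 20, 200),
}

for (lang, gender), f0 in f0_values.items():
    lo, hi = np.percentile(f0, [5, 95])   # language/gender-specific pitch interval
    print(f"{lang} {gender}: mean {f0.mean():.0f} Hz, 5-95% range {lo:.0f}-{hi:.0f} Hz")
```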
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20analysis" title="speech analysis">speech analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20analysis" title=" statistical analysis"> statistical analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20recognition" title=" speaker recognition"> speaker recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification%20of%20person" title=" identification of person"> identification of person</a> </p> <a href="https://publications.waset.org/abstracts/8047/a-cross-gender-statistical-analysis-of-tuvinian-intonation-features-in-comparison-with-uzbek-and-azerbaijani" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7603</span> Sleep Apnea Hypopnea Syndrom Diagnosis Using Advanced ANN Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sachin%20Singh">Sachin Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Penzel"> Thomas Penzel</a>, <a href="https://publications.waset.org/abstracts/search?q=Dinesh%20Nandan"> Dinesh Nandan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Accurate identification of Sleep Apnea Hypopnea Syndrom Diagnosis is difficult problem for human expert because of variability among persons and unwanted noise. This paper proposes the diagonosis of Sleep Apnea Hypopnea Syndrome (SAHS) using airflow, ECG, Pulse and SaO2 signals. The features of each type of these signals are extracted using statistical methods and ANN learning methods. These extracted features are used to approximate the patient's Apnea Hypopnea Index(AHI) using sample signals in model. Advance signal processing is also applied to snore sound signal to locate snore event and SaO2 signal is used to support whether determined snore event is true or noise. Finally, Apnea Hypopnea Index (AHI) event is calculated as per true snore event detected. Experiment results shows that the sensitivity can reach up to 96% and specificity to 96% as AHI greater than equal to 5. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title="neural network">neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=AHI" title=" AHI"> AHI</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20methods" title=" statistical methods"> statistical methods</a>, <a href="https://publications.waset.org/abstracts/search?q=autoregressive%20models" title=" autoregressive models"> autoregressive models</a> </p> <a href="https://publications.waset.org/abstracts/118581/sleep-apnea-hypopnea-syndrom-diagnosis-using-advanced-ann-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/118581.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7602</span> A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Zhang">Hui Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Tian"> Ye Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Fang%20Ye"> Fang Ye</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziming%20Guo"> Ziming Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, communication signal automatic modulation recognition methods are mainly divided into two major categories. One is the maximum likelihood hypothesis testing method based on decision theory, the other is a statistical pattern recognition method based on feature extraction. Now, the most commonly used is a statistical pattern recognition method, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for the communication signal based on the improved Holder cloud feature. And the extreme learning machine (ELM) is used which aims at the problem of the real-time in the modern warfare to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features and the performance of the algorithm is improved. This algorithm addresses the problem that a simple feature extraction algorithm based on Holder coefficient feature is difficult to recognize at low SNR, and it also has a better recognition accuracy. The results of simulations show that the approach in this paper still has a good classification result at low SNR, even when the SNR is -15dB, the recognition accuracy still reaches 76%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication%20signal" title="communication signal">communication signal</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=Holder%20coefficient" title=" Holder coefficient"> Holder coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=improved%20cloud%20model" title=" improved cloud model"> improved cloud model</a> </p> <a href="https://publications.waset.org/abstracts/101463/a-communication-signal-recognition-algorithm-based-on-holder-coefficient-characteristics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101463.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7601</span> Statistical Comparison of Machine and Manual Translation: A Corpus-Based Study of Gone with the Wind </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yanmeng%20Liu">Yanmeng Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article analyzes and compares the linguistic differences between machine translation and manual translation, through a case study of the book Gone with the Wind. As an important carrier of human feeling and thinking, the literature translation poses a huge difficulty for machine translation, and it is supposed to expose distinct translation features apart from manual translation. In order to display linguistic features objectively, tentative uses of computerized and statistical evidence to the systematic investigation of large scale translation corpora by using quantitative methods have been deployed. This study compiles bilingual corpus with four versions of Chinese translations of the book Gone with the Wind, namely, Piao by Chunhai Fan, Piao by Huairen Huang, translations by Google Translation and Baidu Translation. After processing the corpus with the software of Stanford Segmenter, Stanford Postagger, and AntConc, etc., the study analyzes linguistic data and answers the following questions: 1. How does the machine translation differ from manual translation linguistically? 2. Why do these deviances happen? This paper combines translation study with the knowledge of corpus linguistics, and concretes divergent linguistic dimensions in translated text analysis, in order to present linguistic deviances in manual and machine translation. Consequently, this study provides a more accurate and more fine-grained understanding of machine translation products, and it also proposes several suggestions for machine translation development in the future. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=corpus-based%20analysis" title="corpus-based analysis">corpus-based analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=linguistic%20deviances" title=" linguistic deviances"> linguistic deviances</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20translation" title=" machine translation"> machine translation</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20evidence" title=" statistical evidence"> statistical evidence</a> </p> <a href="https://publications.waset.org/abstracts/109650/statistical-comparison-of-machine-and-manual-translation-a-corpus-based-study-of-gone-with-the-wind" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7600</span> Economics of Oil and Its Stability in the Gulf Region </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Al%20Mutawa%20A.%20Amir">Al Mutawa A. Amir</a>, <a href="https://publications.waset.org/abstracts/search?q=Liaqat%20Ali"> Liaqat Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=Faisal%20Ali"> Faisal Ali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> After the World War II, the world economy was disrupted and changed due to oil and its prices. The research in this paper presents the basic statistical features and economic characteristics of the Gulf economy. The main features of the Gulf economies and its heavy dependence on oil exports, its dualism between modern and traditional sectors and its rapidly increasing affluences are particularly emphasized.&nbsp; In this context, the research in this paper discussed the problems of growth versus development and has attempted to draw the implications for the future economic development of this area. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=oil%20prices" title="oil prices">oil prices</a>, <a href="https://publications.waset.org/abstracts/search?q=GCC" title=" GCC"> GCC</a>, <a href="https://publications.waset.org/abstracts/search?q=economic%20growth" title=" economic growth"> economic growth</a>, <a href="https://publications.waset.org/abstracts/search?q=gulf%20oil" title=" gulf oil"> gulf oil</a> </p> <a href="https://publications.waset.org/abstracts/64451/economics-of-oil-and-its-stability-in-the-gulf-region" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7599</span> Relevant LMA Features for Human Motion Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion recognition from videos is actually a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially motion representation step with relevant features. Our descriptor vector is inspired from Laban Movement Analysis method. We propose discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on MSRC-12 and UTKinect datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discriminative%20LMA%20features" title="discriminative LMA features">discriminative LMA features</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20reduction" title=" features reduction"> features reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20recognition" title=" human motion recognition"> human motion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a> </p> <a href="https://publications.waset.org/abstracts/96299/relevant-lma-features-for-human-motion-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7598</span> Iris Recognition Based on the Low Order Norms of Gradient Components</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iman%20A.%20Saad">Iman A. Saad</a>, <a href="https://publications.waset.org/abstracts/search?q=Loay%20E.%20George"> Loay E. 
George</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The iris pattern is an important biological feature of the human body, and it has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly arise from lighting differences and camera changes. First, the iris region is located and then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. In order to account for feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric to determine the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached 99.92%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20recognition" title="iris recognition">iris recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=contrast%20stretching" title=" contrast stretching"> contrast stretching</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20features" title=" gradient features"> gradient features</a>, <a href="https://publications.waset.org/abstracts/search?q=texture%20features" title=" texture features"> texture features</a>, <a href="https://publications.waset.org/abstracts/search?q=Euclidean%20metric" title=" Euclidean metric"> Euclidean metric</a> </p> <a href="https://publications.waset.org/abstracts/13277/iris-recognition-based-on-the-low-order-norms-of-gradient-components" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13277.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">334</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7597</span> Students&#039; Statistical Reasoning and Attitudes towards Statistics in Blended Learning, E-Learning and On-Campus Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petros%20Roussos">Petros Roussos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study focused on students' statistical reasoning related to Null Hypothesis Statistical Testing and p-values.
Its objective was to test the hypothesis that neither the place (classroom, at a distance, online) nor the medium that actually supports the learning (ICT, internet, books) has an effect on understanding of statistical concepts. In addition, it was expected that students' attitudes towards statistics would not predict understanding of statistical concepts. The sample consisted of 385 undergraduate and postgraduate students from six state and private universities (five in Greece and one in Cyprus). Students were administered two questionnaires: a) the Greek version of the Survey of Attitudes Toward Statistics, and b) a short instrument which measures students' understanding of statistical significance and p-values. Results suggest that attitudes towards statistics do not predict students' understanding of statistical concepts, and that the medium did not have an effect. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attitudes%20towards%20statistics" title="attitudes towards statistics">attitudes towards statistics</a>, <a href="https://publications.waset.org/abstracts/search?q=blended%20learning" title=" blended learning"> blended learning</a>, <a href="https://publications.waset.org/abstracts/search?q=e-learning" title=" e-learning"> e-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20reasoning" title=" statistical reasoning"> statistical reasoning</a> </p> <a href="https://publications.waset.org/abstracts/46506/students-statistical-reasoning-and-attitudes-towards-statistics-in-blended-learning-e-learning-and-on-campus-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46506.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7596</span> Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20K.%20Chaurasiya">R. K. Chaurasiya</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20D.%20Londhe"> N. D. Londhe</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ghosh"> S. Ghosh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach that classifies an EEG signal into one of two classes: epileptic seizure or not. In the proposed approach, we start by extracting features with the Discrete Wavelet Transform (DWT), which decomposes the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). Classification is based on reducing the feature dimension with PCA and deriving the support vectors with a Support Vector Machine (SVM). The experiments are performed on a real, standard dataset, and a very high level of classification accuracy is obtained.
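<p class="card-text"><em>A compact sketch of the DWT + PCA + SVM pipeline described above, using synthetic signals in place of EEG recordings; the wavelet, decomposition level, per-sub-band statistics and hyper-parameters are illustrative assumptions, not the authors' settings (assumes the PyWavelets and scikit-learn packages).</em></p> <pre><code class="language-python">
# DWT feature extraction, PCA reduction, SVM classification on synthetic signals.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # approximation + detail coefficients
    feats = []
    for c in coeffs:
        # Simple statistics per sub-band: mean, standard deviation, max magnitude, energy.
        feats += [c.mean(), c.std(), np.abs(c).max(), (c ** 2).sum()]
    return feats

# Synthetic stand-in for labelled EEG segments (seizure vs. non-seizure).
rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal(1024)) for _ in range(100)])
y = rng.integers(0, 2, 100)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
</code></pre>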
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discrete%20wavelet%20transform" title="discrete wavelet transform">discrete wavelet transform</a>, <a href="https://publications.waset.org/abstracts/search?q=electroencephalogram" title=" electroencephalogram"> electroencephalogram</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20component%20analysis" title=" principal component analysis"> principal component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/18113/statistical-wavelet-features-pca-and-svm-based-approach-for-eeg-signals-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18113.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">638</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7595</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method accounts for a convolutional layer capable of automatically learning correlation between neighbouring pixels. In the current form, Convolutional Neural Network (CNN) will learn features based on an image&#39;s content instead of the structural features of the image. The layer is particularly designed to subdue an image&#39;s content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The paper was assessed on latest natural and computer generated images, and it was concluded that it performs better than the current state of the art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7594</span> Impact of Variability in Delineation on PET Radiomics Features in Lung Tumors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20Falahatpour">Mahsa Falahatpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: This study aims to explore how inter-observer variability in manual tumor segmentation impacts the reliability of radiomic features in non–small cell lung cancer (NSCLC). Methods: The study included twenty-three NSCLC tumors. Each patient had three tumor segmentations (VOL1, VOL2, VOL3) contoured on PET/CT scans by three radiation oncologists. Dice coefficients (DCS) were used to measure the segmentation variability. Radiomic features were extracted with 3D-slicer software, consisting of 66 features: first-order (n=15), second-order (GLCM, GLDM, GLRLM, and GLSZM) (n=33). The inter-observer variability of radiomic features was assessed using the intraclass correlation coefficient (ICC). An ICC > 0.8 indicates good stability. Results: The mean DSC of VOL1, VOL2, and VOL3 was 0.80 ± 0.04, 0.85 ± 0.03, and 0.76 ± 0.06, respectively. 92% of all extracted radiomic features were found to be stable (ICC > 0.8). The GLCM texture features had the highest stability (96%), followed by GLRLM features (90%) and GLSZM features (87%). The DSC was found to be highly correlated with the stability of radiomic features. Conclusion: The variability in inter-observer segmentation significantly impacts radiomics analysis, leading to a reduction in the number of appropriate radiomic features. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PET%2FCT" title="PET/CT">PET/CT</a>, <a href="https://publications.waset.org/abstracts/search?q=radiomics" title=" radiomics"> radiomics</a>, <a href="https://publications.waset.org/abstracts/search?q=radiotherapy" title=" radiotherapy"> radiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=NSCLC" title=" NSCLC"> NSCLC</a> </p> <a href="https://publications.waset.org/abstracts/186981/impact-of-variability-in-delineation-on-pet-radiomics-features-in-lung-tumors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186981.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">44</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7593</span> Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Ghahramani">Mohammad Ghahramani</a>, <a href="https://publications.waset.org/abstracts/search?q=Fahimeh%20Saei%20Manesh"> Fahimeh Saei Manesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their loss against their customers. In this research work, we perform deep, real-time analysis on every soccer match around the world that distinguishes our work from others by focusing on particular seasons, teams and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce various sources of information available for soccer analysis for teams around the world that helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is to introduce our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds before and in-play are considered in the image format versus time including the halftime. Local Binary patterns, (LBP) is then employed to extract features from the image. Our analyses reveal incredibly interesting features and rules if a soccer match has reached enough stability. For example, our “8-minute rule” implies if 'Team A' scores a goal and can maintain the result for at least 8 minutes then the match would end in their favor in a stable match. We could also make accurate predictions before the match of scoring less/more than 2.5 goals. We benefit from the Gradient Boosting Trees, GBT, to extract highly related features. Once the features are selected from this pool of data, the Decision trees decide if the match is stable. A stable match is then passed to a post-processing stage to check its properties such as betters’ and punters’ behavior and its statistical data to issue the prediction. 
The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and exposes the inefficiency of the betting market; top soccer tipsters achieve only 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches from 2012 and our algorithm would greatly help coaches and punters obtain accurate analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=soccer" title="soccer">soccer</a>, <a href="https://publications.waset.org/abstracts/search?q=analytics" title=" analytics"> analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=database" title=" database"> database</a> </p> <a href="https://publications.waset.org/abstracts/73340/local-binary-patterns-based-statistical-data-analysis-for-accurate-soccer-match-prediction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/73340.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7592</span> Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Milad%20Vahidi">Milad Vahidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmod%20R.%20Sahebi"> Mahmod R. Sahebi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehrnoosh%20Omati"> Mehrnoosh Omati</a>, <a href="https://publications.waset.org/abstracts/search?q=Reza%20Mohammadi"> Reza Mohammadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest management organizations need information to perform their work effectively, and remote sensing is an effective method of acquiring information about the Earth. Two remote sensing image datasets were used to classify forested regions. First, all extractable features were derived from the hyperspectral and PolSAR images. The optical features were spectral indexes related to chemical and water content, structural indexes, effective bands and absorption features; the PolSAR features were the original data, target decomposition components, and SAR discriminator features. Second, particle swarm optimization (PSO) and genetic algorithms (GA) were applied to select the optimal features. The support vector machine (SVM) classifier was then used to classify the image. The results showed that the combination of PSO and SVM achieved higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indexes, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.
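<p class="card-text"><em>A schematic of the final classification stage described above: an SVM trained on a selected subset of a stacked optical/PolSAR feature set. The random data, the four-class labelling and the 'selected' indices are placeholders for what a PSO or GA search would return (assumes scikit-learn).</em></p> <pre><code class="language-python">
# SVM classification on a PSO/GA-style feature subset (synthetic data, illustrative only).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 40))        # stand-in for a per-pixel hyperspectral + PolSAR feature stack
y = rng.integers(0, 4, 300)               # four hypothetical tree-species classes
selected = [0, 3, 7, 12, 21, 33]          # indices a PSO/GA feature search might return

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # classify using only the selected features
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
</code></pre>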
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral" title="hyperspectral">hyperspectral</a>, <a href="https://publications.waset.org/abstracts/search?q=PolSAR" title=" PolSAR"> PolSAR</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/95461/tree-species-classification-using-effective-features-of-polarimetric-sar-and-hyperspectral-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7591</span> Local Spectrum Feature Extraction for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Imran%20Ahmad">Muhammad Imran Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruzelita%20Ngadiran"> Ruzelita Ngadiran</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Nazrin%20Md%20Isa"> Mohd Nazrin Md Isa</a>, <a href="https://publications.waset.org/abstracts/search?q=Nor%20Ashidi%20Mat%20Isa"> Nor Ashidi Mat Isa</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20ZaizuIlyas"> Mohd ZaizuIlyas</a>, <a href="https://publications.waset.org/abstracts/search?q=Raja%20Abdullah%20Raja%20Ahmad"> Raja Abdullah Raja Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Said%20Amirul%20Anwar%20Ab%20Hamid"> Said Amirul Anwar Ab Hamid</a>, <a href="https://publications.waset.org/abstracts/search?q=Muzammil%20Jusoh"> Muzammil Jusoh </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents two technique, local feature extraction using image spectrum and low frequency spectrum modelling using GMM to capture the underlying statistical information to improve the performance of face recognition system. Local spectrum features are extracted using overlap sub block window that are mapping on the face image. For each of this block, spatial domain is transformed to frequency domain using DFT. A low frequency coefficient is preserved by discarding high frequency coefficients by applying rectangular mask on the spectrum of the facial image. Low frequency information is non Gaussian in the feature space and by using combination of several Gaussian function that has different statistical properties, the best feature representation can be model using probability density function. The recognition process is performed using maximum likelihood value computed using pre-calculate GMM components. The method is tested using FERET data sets and is able to achieved 92% recognition rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=local%20features%20modelling" title="local features modelling">local features modelling</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition%20system" title=" face recognition system"> face recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20mixture%20models" title=" Gaussian mixture models"> Gaussian mixture models</a>, <a href="https://publications.waset.org/abstracts/search?q=Feret" title=" Feret"> Feret</a> </p> <a href="https://publications.waset.org/abstracts/17388/local-spectrum-feature-extraction-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17388.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">667</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7590</span> Statistical Convergence for the Approximation of Linear Positive Operators</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Bhardwaj">Neha Bhardwaj</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we consider positive linear operators and study the Voronovskaya type result of the operator then obtain an error estimate in terms of the higher order modulus of continuity of the function being approximated and its A-statistical convergence. Also, we compute the corresponding rate of A-statistical convergence for the linear positive operators. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Poisson%20distribution" title="Poisson distribution">Poisson distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=Voronovskaya" title=" Voronovskaya"> Voronovskaya</a>, <a href="https://publications.waset.org/abstracts/search?q=modulus%20of%20continuity" title=" modulus of continuity"> modulus of continuity</a>, <a href="https://publications.waset.org/abstracts/search?q=a-statistical%20convergence" title=" a-statistical convergence"> a-statistical convergence</a> </p> <a href="https://publications.waset.org/abstracts/70017/statistical-convergence-for-the-approximation-of-linear-positive-operators" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70017.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7589</span> Active Features Determination: A Unified Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meenal%20Badki">Meenal Badki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We address the issue of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) needs to be gathered, given a large number of examples with some features (such as demographics) and some examples with all the features (such as the complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. Our proposal is a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify examples that differ from the full data set and obtain all the features for the examples that match. Our comprehensive evaluation shows the efficacy of this approach, which is driven by four authentic clinical tasks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20determination" title="feature determination">feature determination</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20learning" title=" active learning"> active learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sample-efficiency" title=" sample-efficiency"> sample-efficiency</a> </p> <a href="https://publications.waset.org/abstracts/180994/active-features-determination-a-unified-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/180994.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=253">253</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=254">254</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=statistical%20features&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div 
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
