<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: features</title> <meta name="description" content="Search results for: features"> <meta name="keywords" content="features"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research 
Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="features" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> 
<form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="features"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3843</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: features</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3843</span> Relevant LMA Features for Human Motion Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Insaf%20Ajili">Insaf Ajili</a>, <a href="https://publications.waset.org/abstracts/search?q=Malik%20Mallem"> Malik Mallem</a>, <a href="https://publications.waset.org/abstracts/search?q=Jean-Yves%20Didier"> Jean-Yves Didier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method.
We select discriminative features using the Random Forest algorithm, removing redundant features so that learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discriminative%20LMA%20features" title="discriminative LMA features">discriminative LMA features</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20reduction" title=" features reduction"> features reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20motion%20recognition" title=" human motion recognition"> human motion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a> </p> <a href="https://publications.waset.org/abstracts/96299/relevant-lma-features-for-human-motion-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96299.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3842</span> Impact of Variability in Delineation on PET Radiomics Features in Lung Tumors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahsa%20Falahatpour">Mahsa Falahatpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: This study aims to explore how inter-observer variability in manual tumor segmentation impacts the reliability of radiomic features in non–small cell lung cancer (NSCLC). Methods: The study included twenty-three NSCLC tumors.
Each patient had three tumor segmentations (VOL1, VOL2, VOL3) contoured on PET/CT scans by three radiation oncologists. Dice similarity coefficients (DSC) were used to measure the segmentation variability. Radiomic features were extracted with 3D-slicer software, consisting of 66 features: first-order (n=15), second-order (GLCM, GLDM, GLRLM, and GLSZM) (n=33). The inter-observer variability of radiomic features was assessed using the intraclass correlation coefficient (ICC). An ICC > 0.8 indicates good stability. Results: The mean DSC of VOL1, VOL2, and VOL3 was 0.80 ± 0.04, 0.85 ± 0.03, and 0.76 ± 0.06, respectively. 92% of all extracted radiomic features were found to be stable (ICC > 0.8). The GLCM texture features had the highest stability (96%), followed by GLRLM features (90%) and GLSZM features (87%). The DSC was found to be highly correlated with the stability of radiomic features. Conclusion: The variability in inter-observer segmentation significantly impacts radiomics analysis, leading to a reduction in the number of appropriate radiomic features.
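The stability check described above, an ICC computed per feature across the three observers' contours, can be sketched as follows. This is a minimal illustration only: the abstract does not specify which ICC form was used, so a two-way random-effects ICC(2,1) is assumed, and all names below are ours.

```python
import numpy as np

def icc2_1(Y):
    """Two-way random-effects ICC(2,1) for one radiomic feature.

    Y is an (n_tumors, k_observers) matrix: one row per tumor, one
    column per observer's segmentation (e.g. VOL1, VOL2, VOL3).
    """
    n, k = Y.shape
    grand = Y.mean()
    # mean squares from the two-way ANOVA decomposition
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between tumors
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between observers
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A feature would then be counted as stable when `icc2_1` exceeds 0.8, mirroring the threshold used in the study; a feature whose values are identical across observers scores exactly 1.0.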
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PET%2FCT" title="PET/CT">PET/CT</a>, <a href="https://publications.waset.org/abstracts/search?q=radiomics" title=" radiomics"> radiomics</a>, <a href="https://publications.waset.org/abstracts/search?q=radiotherapy" title=" radiotherapy"> radiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=NSCLC" title=" NSCLC"> NSCLC</a> </p> <a href="https://publications.waset.org/abstracts/186981/impact-of-variability-in-delineation-on-pet-radiomics-features-in-lung-tumors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186981.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">44</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3841</span> Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Milad%20Vahidi">Milad Vahidi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmod%20R.%20Sahebi"> Mahmod R. Sahebi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehrnoosh%20Omati"> Mehrnoosh Omati</a>, <a href="https://publications.waset.org/abstracts/search?q=Reza%20Mohammadi"> Reza Mohammadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Forest management organizations need information to perform their work effectively. Remote sensing is an effective method to acquire information from the Earth. 
Two datasets of remote sensing images were used to classify forested regions. First, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indices related to chemical and water content, structural indices, effective bands, and absorption features. The PolSAR features were the original data, target decomposition components, and SAR discriminator features. Second, particle swarm optimization (PSO) and genetic algorithms (GA) were applied to select the optimal features. The support vector machine (SVM) classifier was then used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases. This combination provided an overall accuracy of about 90.56%. The effective features were the spectral indices, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hyperspectral" title="hyperspectral">hyperspectral</a>, <a href="https://publications.waset.org/abstracts/search?q=PolSAR" title=" PolSAR"> PolSAR</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/95461/tree-species-classification-using-effective-features-of-polarimetric-sar-and-hyperspectral-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95461.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3840</span> Active Features Determination: A Unified Framework</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meenal%20Badki">Meenal Badki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We address the issue of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) needs to be gathered, given a large number of examples with some features (such as demographics) and some examples with all the features (such as the complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. Our proposal is a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify examples that differ from the full data set and obtain all the features for the examples that match. Our comprehensive evaluation, driven by four authentic clinical tasks, shows the efficacy of this approach.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20determination" title="feature determination">feature determination</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=active%20learning" title=" active learning"> active learning</a>, <a href="https://publications.waset.org/abstracts/search?q=sample-efficiency" title=" sample-efficiency"> sample-efficiency</a> </p> <a href="https://publications.waset.org/abstracts/180994/active-features-determination-a-unified-framework" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/180994.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3839</span> 2D Point Clouds Features from Radar for Helicopter Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Danilo%20Habermann">Danilo Habermann</a>, <a href="https://publications.waset.org/abstracts/search?q=Aleksander%20Medella"> Aleksander Medella</a>, <a href="https://publications.waset.org/abstracts/search?q=Carla%20Cremon"> Carla Cremon</a>, <a href="https://publications.waset.org/abstracts/search?q=Yusef%20Caceres"> Yusef Caceres</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to analyze the ability of 2D point cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length, the number of blades, or the period of the micro-Doppler signatures.
It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the proposed features. The high accuracy obtained with each classifier demonstrates that 2D point cloud features are very useful for classifying helicopters from radar signals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=helicopter%20classification" title="helicopter classification">helicopter classification</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20clouds%20features" title=" point clouds features"> point clouds features</a>, <a href="https://publications.waset.org/abstracts/search?q=radar" title=" radar"> radar</a>, <a href="https://publications.waset.org/abstracts/search?q=supervised%20classifiers" title=" supervised classifiers"> supervised classifiers</a> </p> <a href="https://publications.waset.org/abstracts/85676/2d-point-clouds-features-from-radar-for-helicopter-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85676.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">227</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3838</span> Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=T.%20Hari%20Prasath">T. Hari Prasath</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ithaya%20Rani"> P. Ithaya Rani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of visual technology, recognizing emotions from face images is a challenging task. Several related methods have not utilized dynamic facial features effectively, limiting their performance. This paper proposes a high-performance method for emotion recognition using dynamic facial features. Initially, local features are captured by Gabor filters at different scales and orientations in each frame to find the position and scale of the face against different backgrounds. The Gabor features are sent to an ensemble classifier for detecting Gabor facial features. The region of dynamic features is captured from the Gabor facial features in consecutive frames, representing the dynamic variations of facial appearance. Each region of dynamic features is normalized using the Z-score method and further encoded into binary pattern features with the help of threshold values. The binary features are passed to a multi-class AdaBoost classifier trained on a database containing happiness, sadness, surprise, fear, anger, disgust, and neutral expressions to classify the discriminative dynamic features for emotion recognition. The method is evaluated on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and shows significant performance improvement over existing methods owing to its dynamic features.
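The first stage above, Gabor responses at several scales and orientations pooled into per-frame features, can be sketched roughly as follows. This is a simplified single-frame illustration; the kernel size, wavelengths, and function names are our choices for the example, not values taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, theta, lam, sigma=None, gamma=0.5):
    # Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier
    # oriented at angle theta with wavelength lam.
    if sigma is None:
        sigma = 0.56 * lam
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   lams=(4, 8)):
    # Pool each filter response into simple statistics, giving one
    # feature vector per frame (2 scales x 4 orientations x 2 stats = 16).
    feats = []
    for lam in lams:
        for th in thetas:
            resp = convolve2d(img, gabor_kernel(9, th, lam), mode='same')
            feats += [resp.mean(), resp.std()]
    # Per-region Z-score normalization and thresholding into binary
    # patterns, as described in the abstract, would follow this step.
    return np.array(feats)
```

A frame is thus reduced to a fixed-length vector that downstream classifiers (the ensemble detector and AdaBoost stages described above) can consume.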
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detecting%20face" title="detecting face">detecting face</a>, <a href="https://publications.waset.org/abstracts/search?q=Gabor%20filter" title=" Gabor filter"> Gabor filter</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-class%20AdaBoost%20classifier" title=" multi-class AdaBoost classifier"> multi-class AdaBoost classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=Z-score%20normalization" title=" Z-score normalization"> Z-score normalization</a> </p> <a href="https://publications.waset.org/abstracts/85005/dynamic-gabor-filter-facial-features-based-recognition-of-emotion-in-video-sequences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85005.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3837</span> New Features for Copy-Move Image Forgery Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michael%20Zimba">Michael Zimba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A novel set of features for copy-move image forgery (CMIF) detection is proposed. The proposed set presents a new approach that relies on electrostatic field theory (EFT). Solely to reduce the dimension of a suspicious image, the method first performs a discrete wavelet transform (DWT) of the image and extracts only the approximation subband. The extracted subband is then bijectively mapped onto a virtual electrostatic field, where concepts of EFT are utilised to extract robust features.
The extracted features are shown to be invariant to additive noise, JPEG compression, and affine transformation. The proposed features can also be used in general object matching. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20electrostatic%20field" title="virtual electrostatic field">virtual electrostatic field</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=affine%20transformation" title=" affine transformation"> affine transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=copy-move%20image%20forgery" title=" copy-move image forgery"> copy-move image forgery</a> </p> <a href="https://publications.waset.org/abstracts/29604/new-features-for-copy-move-image-forgery-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">543</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3836</span> Using Reservoir Models for Monitoring Geothermal Surface Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20P.%20O%E2%80%99Sullivan">John P. O’Sullivan</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20M.%20P.%20Ratouis"> Thomas M. P. Ratouis</a>, <a href="https://publications.waset.org/abstracts/search?q=Michael%20J.%20O%E2%80%99Sullivan"> Michael J. 
O’Sullivan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the use of geothermal energy grows internationally, more effort is required to monitor and protect areas with rare and important geothermal surface features. A number of approaches are presented for developing and calibrating numerical geothermal reservoir models that are capable of accurately representing geothermal surface features. The approaches are discussed in the context of case studies of the Rotorua geothermal system and the Orakei Korako geothermal system, both of which contain important surface features. The results show that the models are able to match the available field data accurately and hence can be used as valuable tools for predicting the future response of the systems to changes in use. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=geothermal%20reservoir%20models" title="geothermal reservoir models">geothermal reservoir models</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20features" title=" surface features"> surface features</a>, <a href="https://publications.waset.org/abstracts/search?q=monitoring" title=" monitoring"> monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=TOUGH2" title=" TOUGH2"> TOUGH2</a> </p> <a href="https://publications.waset.org/abstracts/25882/using-reservoir-models-for-monitoring-geothermal-surface-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25882.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3835</span> Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kyi%20Pyar%20Zaw">Kyi Pyar Zaw</a>, <a href="https://publications.waset.org/abstracts/search?q=Zin%20Mar%20Kyu"> Zin Mar Kyu </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used. Character recognition involves three steps: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, the 8 values obtained by the same eight-direction chain code frequency extraction are summed to give one feature per block, yielding 16 block features. The number-of-holes feature is used to cluster similar characters. Most common Myanmar characters in various font sizes can be recognized using these features. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all features of the input character against the already trained character features.
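The 8 global features and 16 block features described above can be sketched as follows. This assumes the character boundary has already been traced into an ordered pixel list; the tracing itself, the number-of-holes feature, and all names here are ours, not taken from the paper.

```python
import numpy as np

# Freeman chain codes for the eight neighbour directions (dx, dy),
# with y increasing downward as in image coordinates.
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(boundary):
    """8 global features: frequency of each chain code direction."""
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        hist[DIRS[(x1 - x0, y1 - y0)]] += 1
    return hist / max(hist.sum(), 1)

def block_features(boundary, w, h, grid=4):
    """16 block features: the 8 per-block chain code counts summed,
    i.e. the number of boundary moves starting in each block."""
    feats = np.zeros(grid * grid)
    for (x0, y0) in boundary[:-1]:
        bx = min(x0 * grid // w, grid - 1)
        by = min(y0 * grid // h, grid - 1)
        feats[by * grid + bx] += 1
    return feats

# Boundary of a small square traced from the top-left corner:
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
          (1, 2), (0, 2), (0, 1), (0, 0)]
```

For the square above, the histogram puts equal weight (0.25) on the four axis-aligned codes 0, 2, 4, and 6, and the block counts sum to the eight boundary moves. Concatenating the 8 global frequencies, the 16 block sums, and the hole count reproduces the 25-feature layout described in the abstract.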
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chain%20code%20frequency" title="chain code frequency">chain code frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20matching" title=" features matching"> features matching</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a> </p> <a href="https://publications.waset.org/abstracts/77278/myanmar-character-recognition-using-eight-direction-chain-code-frequency-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77278.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3834</span> An Experimental Study for Assessing Email Classification Attributes Using Feature Selection Methods </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Issa%20Qabaja">Issa Qabaja</a>, <a href="https://publications.waset.org/abstracts/search?q=Fadi%20Thabtah"> Fadi Thabtah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Email phishing classification is one of the vital problems in the online security research domain that has attracted several scholars due to its impact on users' payments performed daily online.
One way to achieve good performance with detection algorithms on the email phishing problem is to identify the minimal set of features that significantly impact the phishing detection rate. This paper investigates three known feature selection methods, namely Information Gain (IG), Chi-square and Correlation Features Set (CFS), on the email phishing problem to separate highly influential features from less influential ones in phishing detection. We measure the degree of influence by applying four data mining algorithms on a large set of features. We compare the accuracy of these algorithms on the complete feature set before and after feature selection is applied. After conducting experiments, the results show that 12 common significant features were chosen among the considered features by the feature selection methods. Further, the average detection accuracy derived by the data mining algorithms on the reduced 12-feature set was only slightly affected compared with the one derived from the 47-feature set.
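Of the three selection methods named above, Information Gain is the simplest to sketch from scratch. A minimal illustration for discrete features follows; the function names and ranking step are ours, not from the paper.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """IG(C; F) = H(C) - sum over values v of P(F=v) * H(C | F=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Ranking features by IG and keeping the top-scoring ones mirrors the
# selection step: e.g. scores = {name: info_gain(col, y) for name, col
# in feature_columns.items()}, then sort by score.
```

A feature identical to the class labels scores the full class entropy, while a feature independent of the labels scores zero, which is exactly the "high influence vs. low influence" separation the study relies on.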
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title="data mining">data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=email%20classification" title=" email classification"> email classification</a>, <a href="https://publications.waset.org/abstracts/search?q=phishing" title=" phishing"> phishing</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20security" title=" online security"> online security</a> </p> <a href="https://publications.waset.org/abstracts/19757/an-experimental-study-for-assessing-email-classification-attributes-using-feature-selection-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19757.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">432</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3833</span> Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haiyan%20Wu">Haiyan Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20Liu"> Ying Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Shaoyun%20Shi"> Shaoyun Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Authorship attribution extracts features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or some transparent machine learning methods gives a portrait of the authors' writing style.
However, these methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks, but few works combine the two. Moreover, predictions by neural networks are difficult to explain, and explainability is vital in authorship attribution tasks. In this paper, we utilize not only statistical style and content features but also syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features can improve on state-of-the-art methods on three benchmark datasets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=authorship%20attribution" title="authorship attribution">authorship attribution</a>, <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title=" attention mechanism"> attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=syntactic%20feature" title=" syntactic feature"> syntactic feature</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/129270/exploring-syntactic-and-semantic-features-for-text-based-authorship-attribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">3832</span> Using New Machine Algorithms to Classify Iranian Musical Instruments According to Temporal, Spectral and Coefficient Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ronak%20Khosravi">Ronak Khosravi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmood%20Abbasi%20Layegh"> Mahmood Abbasi Layegh</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamak%20Haghipour"> Siamak Haghipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Avin%20Esmaili"> Avin Esmaili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a study on the classification of musical woodwind instruments was carried out using a small set of features selected from a broad range of extracted ones by the sequential forward selection method. First, we extract 42 features for each record in a music database of 402 sound files belonging to five different groups: Flutes (end-blown and internal duct), Single-reed, Double-reed (exposed and capped), Triple-reed, and Quadruple-reed. Then, the sequential forward selection method is adopted to choose the best feature set in order to achieve very high classification accuracy. Two classification techniques, support vector machines and relevance vector machines, have been tested, and an accuracy of up to 96% can be achieved using 21 temporal, spectral, and coefficient features and a relevance vector machine with the Gaussian kernel function. 
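The selection-then-classification pipeline above can be sketched with scikit-learn on synthetic data. Everything here is an illustrative assumption: a KNN-driven `SequentialFeatureSelector` plays the sequential forward selection role, and an RBF ("Gaussian") kernel `SVC` stands in for the relevance vector machine, which has no stock scikit-learn implementation.

```python
# Illustrative sketch (assumed details, not the authors' code): sequential
# forward selection over extracted audio features, then an RBF-kernel classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for 402 recordings x 42 temporal/spectral/coefficient features,
# 5 instrument groups
X, y = make_classification(n_samples=402, n_features=42, n_classes=5,
                           n_informative=21, random_state=0)

# Greedily add features one at a time until 21 are selected
sfs = SequentialFeatureSelector(KNeighborsClassifier(), n_features_to_select=21,
                                direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)

# Classify with a Gaussian (RBF) kernel on the selected subset
acc = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=5).mean()
print(f"accuracy with 21 selected features: {acc:.3f}")
```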
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coefficient%20features" title="coefficient features">coefficient features</a>, <a href="https://publications.waset.org/abstracts/search?q=relevance%20vector%20machines" title=" relevance vector machines"> relevance vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20features" title=" spectral features"> spectral features</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machines" title=" support vector machines"> support vector machines</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20features" title=" temporal features"> temporal features</a> </p> <a href="https://publications.waset.org/abstracts/54321/using-new-machine-algorithms-to-classify-iranian-musical-instruments-according-to-temporal-spectral-and-coefficient-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54321.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3831</span> Exploring Chess Game AI Features Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bashayer%20Almalki">Bashayer Almalki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mayar%20Bajrai"> Mayar Bajrai</a>, <a href="https://publications.waset.org/abstracts/search?q=Dana%20Mirah"> Dana Mirah</a>, <a href="https://publications.waset.org/abstracts/search?q=Kholood%20Alghamdi"> Kholood Alghamdi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hala%20Sanyour"> Hala Sanyour</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> This research aims to investigate the features of an AI chess app that are most preferred by users. A questionnaire was used as the methodology to gather responses from a varied group of participants. The questionnaire consisted of several questions related to the features of the AI chess app. The responses were analyzed using descriptive statistics and factor analysis. The findings indicate that the most preferred features of an AI chess app are the ability to play against the computer, the option to adjust the difficulty level, and the availability of tutorials and puzzles. The results of this research could be useful for developers of AI chess apps to enhance the user experience and satisfaction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chess" title="chess">chess</a>, <a href="https://publications.waset.org/abstracts/search?q=game" title=" game"> game</a>, <a href="https://publications.waset.org/abstracts/search?q=application" title=" application"> application</a>, <a href="https://publications.waset.org/abstracts/search?q=computics" title=" computics"> computics</a> </p> <a href="https://publications.waset.org/abstracts/167493/exploring-chess-game-ai-features-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3830</span> Research on Perceptual Features of Couchsurfers on New Hospitality Tourism Platform Couchsurfing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Yuanxiang%20Miao">Yuanxiang Miao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to examine the perceptual features of couchsurfers on a new hospitality tourism platform, the free homestay website couchsurfing. As a local host, the author has accepted 61 couchsurfers in Kyoto, Japan, and attempted to identify couchsurfers' perceptual characteristics by hosting them. The methodology of this research is mainly based on in-depth interviews, supplemented by talking with couchsurfers, observing their behavior, and administering questionnaires. Five dominant perceptual features of couchsurfers were identified: (1) Trusting; (2) Meeting; (3) Sharing; (4) Reciprocity; (5) Worries. The value of this research lies in developing a deeper understanding of the perceptual features of couchsurfers; the author hosted and stayed with 61 couchsurfers from 30 countries and regions over one year. Lastly, the author offers practical suggestions for future research. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=couchsurfing" title="couchsurfing">couchsurfing</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20interview" title=" depth interview"> depth interview</a>, <a href="https://publications.waset.org/abstracts/search?q=hospitality%20tourism" title=" hospitality tourism"> hospitality tourism</a>, <a href="https://publications.waset.org/abstracts/search?q=perceptual%20features" title=" perceptual features"> perceptual features</a> </p> <a href="https://publications.waset.org/abstracts/125558/research-on-perceptual-features-of-couchsurfers-on-new-hospitality-tourism-platform-couchsurfing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3829</span> The Latent Model of Linguistic Features in Korean College Students’ L2 Argumentative Writings: Syntactic Complexity, Lexical Complexity, and Fluency</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiyoung%20Bae">Jiyoung Bae</a>, <a href="https://publications.waset.org/abstracts/search?q=Gyoomi%20Kim"> Gyoomi Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study explores a range of linguistic features used in Korean college students’ argumentative writings for the purpose of developing a model that identifies variables which predict writing proficiencies. This study investigated the latent variable structure of L2 linguistic features, including syntactic complexity, the lexical complexity, and fluency. 
One hundred forty-six university students in Korea participated in this study. The results of the study’s confirmatory factor analysis (CFA) showed that the indicators of linguistic features from this study provided a foundation for re-categorizing indicators found in extant research on L2 Korean writers according to each latent variable of linguistic features. The CFA models indicated one measurement model of L2 syntactic complexity and L2 learners’ writing proficiency; these two latent factors were correlated with each other. Based on the overall findings of the study, the integrated linguistic features of L2 writings suggest some pedagogical implications for L2 writing instruction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=linguistic%20features" title="linguistic features">linguistic features</a>, <a href="https://publications.waset.org/abstracts/search?q=syntactic%20complexity" title=" syntactic complexity"> syntactic complexity</a>, <a href="https://publications.waset.org/abstracts/search?q=lexical%20complexity" title=" lexical complexity"> lexical complexity</a>, <a href="https://publications.waset.org/abstracts/search?q=fluency" title=" fluency"> fluency</a> </p> <a href="https://publications.waset.org/abstracts/100664/the-latent-model-of-linguistic-features-in-korean-college-students-l2-argumentative-writings-syntactic-complexity-lexical-complexity-and-fluency" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/100664.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3828</span> Comparison between XGBoost, LightGBM and CatBoost Using a Home Credit Dataset</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Essam%20Al%20Daoud">Essam Al Daoud</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gradient boosting methods have proven to be a very important strategy. Many successful machine learning solutions have been developed using XGBoost and its derivatives. The aim of this study is to investigate and compare the efficiency of three gradient boosting methods. The Home Credit dataset, which contains 219 features and 356,251 records, is used in this work. New features are generated, and several techniques are used to rank and select the best features. The implementation indicates that LightGBM is faster and more accurate than CatBoost and XGBoost across varying numbers of features and records. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gradient%20boosting" title="gradient boosting">gradient boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=XGBoost" title=" XGBoost"> XGBoost</a>, <a href="https://publications.waset.org/abstracts/search?q=LightGBM" title=" LightGBM"> LightGBM</a>, <a href="https://publications.waset.org/abstracts/search?q=CatBoost" title=" CatBoost"> CatBoost</a>, <a href="https://publications.waset.org/abstracts/search?q=home%20credit" title=" home credit"> home credit</a> </p> <a href="https://publications.waset.org/abstracts/104573/comparison-between-xgboost-lightgbm-and-catboost-using-a-home-credit-dataset" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">171</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3827</span> Native Language 
Identification with Cross-Corpus Evaluation Using Social Media Data: ’Reddit’</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yasmeen%20Bassas">Yasmeen Bassas</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Kuebler"> Sandra Kuebler</a>, <a href="https://publications.waset.org/abstracts/search?q=Allen%20Riddell"> Allen Riddell</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Native language identification is one of the growing subfields in natural language processing (NLP). The task of native language identification (NLI) is mainly concerned with predicting the native language of an author’s writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (the social media corpus Reddit). In this NLI task, the models are trained on one corpus (TOEFL) and then evaluated on data from an external corpus (Reddit). Three classifiers are used in this task: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within and across corpora. 
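The train-on-one-corpus, test-on-another setup described above can be sketched in a few lines. This is a toy illustration with invented sentences and made-up language labels, not the TOEFL or Reddit data; content-based features are represented here by TF-IDF word n-grams feeding a logistic regression classifier.

```python
# Minimal cross-corpus NLI sketch: fit on a "training corpus", predict on a
# disjoint "external corpus". All texts and labels below are synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["I am agree with this idea", "this is very more better",
               "I have went to the shop", "she do not likes it"]
train_langs = ["es", "fr", "de", "de"]          # hypothetical L1 labels
test_texts = ["I am agree with you", "she do not likes the film"]

# Content-based features: word unigrams and bigrams
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_langs)
preds = clf.predict(test_texts)                 # cross-corpus predictions
print(preds)
```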
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=NLI" title="NLI">NLI</a>, <a href="https://publications.waset.org/abstracts/search?q=NLP" title=" NLP"> NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=content-based%20features" title=" content-based features"> content-based features</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20independent%20features" title=" content independent features"> content independent features</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media%20corpus" title=" social media corpus"> social media corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/142396/native-language-identification-with-cross-corpus-evaluation-using-social-media-data-reddit" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3826</span> Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elham%20Serkani">Elham Serkani</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Gharaee%20Garakani"> Hossein Gharaee Garakani</a>, <a href="https://publications.waset.org/abstracts/search?q=Naser%20Mohammadzadeh"> Naser Mohammadzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Elaheh%20Vaezpour"> Elaheh Vaezpour</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Intrusion detection systems 
(IDS) are the main components of network security. These systems analyze network events to detect intrusions. An IDS is designed by training on normal traffic or attack data, and machine learning methods are among the best ways to design one. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine (LS-SVM) algorithm. The remaining features are then ranked according to a predictor importance criterion, and the least important features are eliminated in order. The features remaining at the stage that yields the highest LS-SVM accuracy are selected as the final features. Compared with similar articles that have examined selected features in a least squares support vector machine model, the obtained features give better accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset. 
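The importance-ranked elimination loop described above can be sketched roughly as follows, on synthetic data. Both substitutions here are assumptions for illustration: a linear-kernel SVC stands in for LS-SVM, and a decision tree's feature importances stand in for C5.0's predictor-importance ranking.

```python
# Hedged sketch of backward elimination by importance, keeping whichever
# feature subset scores highest with the SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for reduced network-traffic features
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

# Rank features by a tree's importances (ascending -> reverse for descending)
order = np.argsort(
    DecisionTreeClassifier(random_state=0).fit(X, y).feature_importances_)

best_acc, best_feats = 0.0, list(range(X.shape[1]))
feats = list(order[::-1])              # most important first
while len(feats) > 1:
    acc = cross_val_score(SVC(kernel="linear"), X[:, feats], y, cv=3).mean()
    if acc >= best_acc:
        best_acc, best_feats = acc, feats.copy()
    feats.pop()                        # drop the least important remaining one
print(f"kept {len(best_feats)} features, accuracy {best_acc:.3f}")
```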
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=decision%20tree" title="decision tree">decision tree</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=intrusion%20detection%20system" title=" intrusion detection system"> intrusion detection system</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a> </p> <a href="https://publications.waset.org/abstracts/90456/hybrid-anomaly-detection-using-decision-tree-and-support-vector-machine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90456.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">265</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3825</span> Task Distraction vs. Visual Enhancement: Which Is More Effective?</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huangmei%20Liu">Huangmei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Si%20Liu"> Si Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%E2%80%99nan%20Liu"> Jia’nan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present experiment investigated and compared the effectiveness of two kinds of methods of attention control: Task distraction and visual enhancement. 
In the study, the effectiveness of task distraction applied to explicit features and of visual enhancement applied to implicit features of the same group of Chinese characters was compared based on their effects on the participants’ reaction time, subjective confidence ratings, and verbal reports. We found evidence that visual enhancement of implicit features overcame the contrary effect of task distraction and led to awareness of those implicit features, at least to some extent. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=task%20distraction" title="task distraction">task distraction</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20enhancement" title=" visual enhancement"> visual enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=awareness" title=" awareness"> awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/3302/task-distraction-vs-visual-enhancement-which-is-more-effective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">430</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3824</span> Security Features for Remote Healthcare System: A Feasibility Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tamil%20Chelvi%20Vadivelu">Tamil Chelvi Vadivelu</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurazean%20Maarop"> Nurazean 
Maarop</a>, <a href="https://publications.waset.org/abstracts/search?q=Rasimah%20Che%20Yusoff"> Rasimah Che Yusoff</a>, <a href="https://publications.waset.org/abstracts/search?q=Farhana%20Aini%20Saludin"> Farhana Aini Saludin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Implementing a remote healthcare system requires consideration of many security features. Therefore, before any deployment of a remote healthcare system, a feasibility study from the security perspective is crucial. Remote healthcare systems using WBAN technology have been used in other countries for medical purposes, but in Malaysia such projects have not yet been implemented. This study was conducted qualitatively; the results of interviews involving five healthcare practitioners are elaborated. The study addresses four important security features needed to incorporate a remote healthcare system using WBAN in Malaysian government hospitals. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=remote%20healthcare" title="remote healthcare">remote healthcare</a>, <a href="https://publications.waset.org/abstracts/search?q=IT%20security" title=" IT security"> IT security</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20features" title=" security features"> security features</a>, <a href="https://publications.waset.org/abstracts/search?q=wireless%20sensor%20application" title=" wireless sensor application"> wireless sensor application</a> </p> <a href="https://publications.waset.org/abstracts/20183/security-features-for-remote-healthcare-system-a-feasibility-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">3823</span> Mood Recognition Using Indian Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vishwa%20Joshi">Vishwa Joshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study of mood recognition in the field of music has gained a lot of momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analyzing and identifying the relation between mood and music. In this paper, we take this idea forward and build a system for automatic recognition of the mood underlying audio song clips by mining their audio features. We evaluate several data classification algorithms in order to learn, train, and test a model describing the moods of these audio songs, and we develop an open-source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=music" title="music">music</a>, <a href="https://publications.waset.org/abstracts/search?q=mood" title=" mood"> mood</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/24275/mood-recognition-using-indian-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24275.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">496</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3822</span> Systems Versioning: A Features-Based Meta-Modeling Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ola%20A.%20Younis">Ola A. Younis</a>, <a href="https://publications.waset.org/abstracts/search?q=Said%20Ghoul"> Said Ghoul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Systems running these days are huge, complex and exist in many versions. Controlling these versions and tracking their changes became a very hard process as some versions are created using meaningless names or specifications. Many versions of a system are created with no clear difference between them. This leads to mismatching between a user’s request and the version he gets. In this paper, we present a system versions meta-modeling approach that produces versions based on system’s features. This model reduced the number of steps needed to configure a release and gave each version its unique specifications. 
This approach is applicable to systems that use features in their specifications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=features" title="features">features</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-modeling" title=" meta-modeling"> meta-modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20modeling" title=" semantic modeling"> semantic modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=SPL" title=" SPL"> SPL</a>, <a href="https://publications.waset.org/abstracts/search?q=VCS" title=" VCS"> VCS</a>, <a href="https://publications.waset.org/abstracts/search?q=versioning" title=" versioning"> versioning</a> </p> <a href="https://publications.waset.org/abstracts/7797/systems-versioning-a-features-based-meta-modeling-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7797.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">446</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3821</span> Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Navab%20Karimi">Navab Karimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Tohid%20Alizadeh"> Tohid Alizadeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture the images of bulk raisins in ranges of 5-50% mixed pure-impure berries. 
The textural features of bulk raisins were extracted using Grey-level Histograms, Co-occurrence Matrix, and Local Binary Pattern (a total of 108 features). A Genetic Algorithm and neural network regression were used for selecting and ranking the best features (21 features). As a result, the GLCM feature set was found to have the highest accuracy (92.4%) among the sets. Subsequently, multiple feature combinations from the previous stage were fed into a second regression (linear regression) to increase accuracy, wherein a combination of 16 features was found to be the optimum. Finally, a Support Vector Machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sun-dried%20organic%20raisin" title="sun-dried organic raisin">sun-dried organic raisin</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=ann%20regression" title=" ann regression"> ann regression</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20regression" title=" linear regression"> linear regression</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=south%20azerbaijan." 
title=" south azerbaijan."> south azerbaijan.</a> </p> <a href="https://publications.waset.org/abstracts/172004/machine-vision-system-for-measuring-the-quality-of-bulk-sun-dried-organic-raisins" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172004.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3820</span> Training a Neural Network Using Input Dropout with Aggressive Reweighting (IDAR) on Datasets with Many Useless Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stylianos%20Kampakis">Stylianos Kampakis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new algorithm for neural networks called “Input Dropout with Aggressive Re-weighting” (IDAR) aimed specifically at datasets with many useless features. IDAR combines two techniques (dropout of input neurons and aggressive re-weighting) in order to eliminate the influence of noisy features. The technique can be seen as a generalization of dropout. The algorithm is tested on two different benchmark datasets: a noisy version of the iris dataset and the MADELON dataset. Its performance is compared against three other popular techniques for dealing with useless features: L2 regularization, LASSO, and random forests. The results demonstrate that IDAR can be an effective technique for handling datasets with many useless features. 
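The abstract does not spell out IDAR's update rules, so the following is only a generic illustration of the input-dropout half of the idea on a linear model, with plain weight decay standing in for the aggressive re-weighting step; every detail here is an assumption, not the paper's algorithm.

```python
# Toy input-dropout training loop: features are randomly masked each step so
# the model cannot lean on any single (possibly useless) input. Weight decay
# is a crude stand-in for IDAR's re-weighting. Test-time uses unmasked inputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # only 2 of 10 features matter

w, lr, p_keep = np.zeros(10), 0.1, 0.8
for _ in range(500):
    mask = rng.random(10) < p_keep               # drop input features at random
    z = (X * mask) @ w
    pred = 1.0 / (1.0 + np.exp(-z))              # logistic output
    grad = (X * mask).T @ (pred - y) / len(y)
    w -= lr * grad
    w *= 0.999                                   # mild decay toward zero

preds = (X @ w) > 0
acc = (preds == (y > 0)).mean()
print(f"train accuracy: {acc:.3f}")
```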
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title="neural networks">neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title=" feature selection"> feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=regularization" title=" regularization"> regularization</a>, <a href="https://publications.waset.org/abstracts/search?q=aggressive%20reweighting" title=" aggressive reweighting"> aggressive reweighting</a> </p> <a href="https://publications.waset.org/abstracts/20362/training-a-neural-network-using-input-dropout-with-aggressive-reweighting-idar-on-datasets-with-many-useless-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">455</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3819</span> An Automatic Feature Extraction Technique for 2D Punch Shapes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Awais%20Ahmad%20Khan">Awais Ahmad Khan</a>, <a href="https://publications.waset.org/abstracts/search?q=Emad%20Abouel%20Nasr"> Emad Abouel Nasr</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20M.%20A.%20Hussein"> H. M. A. 
Hussein</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdulrahman%20Al-Ahmari"> Abdulrahman Al-Ahmari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sheet-metal parts have been widely applied in the electronics, communication and mechanical industries in recent decades, but advances in sheet-metal part design and manufacturing still lag behind the growing importance of sheet-metal parts in modern industry. This paper presents a methodology for automatic extraction of some common 2D internal sheet metal features. The features used in this study are taken from the Unipunch™ catalogue. The extraction process starts with data extraction from the STEP file using an object-oriented approach, and with the application of suitable algorithms and rules, all features contained in the catalogue are automatically extracted. Since the extracted features include geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and process planning. 
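The rule-based recognition step can be illustrated with a toy sketch. The loop representation (lists of line/arc entities, as might be collected from STEP geometry) and the classification rules below are invented for illustration; they are not the Unipunch catalogue rules used in the paper, and no real STEP parsing is shown.

```python
# Hypothetical rule-based classifier for closed 2D loops extracted from a
# STEP file; the entity tuples and rules are illustrative only.
def classify_loop(entities):
    """entities: list of ('LINE', length) or ('ARC', sweep_degrees) tuples."""
    lines = [e for e in entities if e[0] == 'LINE']
    arcs = [e for e in entities if e[0] == 'ARC']
    if not lines and len(arcs) == 1 and arcs[0][1] == 360:
        return 'round hole'
    if len(lines) == 4 and not arcs:
        a, b, c, d = sorted(l[1] for l in lines)
        return 'square hole' if abs(a - d) < 1e-6 else 'rectangular hole'
    if len(lines) == 2 and len(arcs) == 2 and all(arc[1] == 180 for arc in arcs):
        return 'obround slot'
    return 'unrecognized feature'

print(classify_loop([('ARC', 360)]))                   # round hole
print(classify_loop([('LINE', 5)] * 4))                # square hole
print(classify_loop([('LINE', 8), ('ARC', 180)] * 2))  # obround slot
```

A production system would also carry the geometric parameters (center, radius, orientation) along with each classification, since the abstract notes the extracted features feed downstream process planning.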
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title="feature extraction">feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=internal%20features" title=" internal features"> internal features</a>, <a href="https://publications.waset.org/abstracts/search?q=punch%20shapes" title=" punch shapes"> punch shapes</a>, <a href="https://publications.waset.org/abstracts/search?q=sheet%20metal" title=" sheet metal"> sheet metal</a> </p> <a href="https://publications.waset.org/abstracts/45001/an-automatic-feature-extraction-technique-for-2d-punch-shapes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45001.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">615</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3818</span> Robust Features for Impulsive Noisy Speech Recognition Using Relative Spectral Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hajer%20Rahali">Hajer Rahali</a>, <a href="https://publications.waset.org/abstracts/search?q=Zied%20Hajaiej"> Zied Hajaiej</a>, <a href="https://publications.waset.org/abstracts/search?q=Noureddine%20Ellouze"> Noureddine Ellouze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The goal of speech parameterization is to extract the relevant information about what is being spoken from the audio signal. In speech recognition systems Mel-Frequency Cepstral Coefficients (MFCC) and Relative Spectral Mel-Frequency Cepstral Coefficients (RASTA-MFCC) are the two main techniques used. 
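Since MFCC extraction underlies both baselines, a minimal textbook MFCC pipeline can be sketched as follows. The parameter values (16 kHz sampling, 512-point FFT, 26 mel bands, 13 coefficients) are common defaults assumed here, not the settings used in the paper.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Textbook MFCC: frame -> window -> power spectrum -> mel filter
    bank -> log -> DCT. Parameters are common defaults, not the paper's."""
    # Frame the signal and apply a Hamming window.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filter bank between 0 Hz and Nyquist.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_ceps]

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(mfcc(tone).shape)  # (frames, coefficients)
```

RASTA-MFCC would additionally band-pass filter the log-mel (or log filter-bank) trajectories over time before the DCT, which is what makes it more robust to slowly varying channel noise.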
This paper presents some modifications to the original MFCC method. In our work, the effectiveness of the proposed changes to MFCC, called Modified Function Cepstral Coefficients (MODFCC), was tested and compared against the original MFCC and RASTA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions within the AURORA databases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auditory%20filter" title="auditory filter">auditory filter</a>, <a href="https://publications.waset.org/abstracts/search?q=impulsive%20noise" title=" impulsive noise"> impulsive noise</a>, <a href="https://publications.waset.org/abstracts/search?q=MFCC" title=" MFCC"> MFCC</a>, <a href="https://publications.waset.org/abstracts/search?q=prosodic%20features" title=" prosodic features"> prosodic features</a>, <a href="https://publications.waset.org/abstracts/search?q=RASTA%20filter" title=" RASTA filter"> RASTA filter</a> </p> <a href="https://publications.waset.org/abstracts/8911/robust-features-for-impulsive-noisy-speech-recognition-using-relative-spectral-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8911.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3817</span> Enterprise Information Portal Features: Results of Content Analysis Literature Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Michal%20Kr%C4%8D%C3%A1l">Michal Krčál</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Since their introduction in the 1990s, Enterprise Information Portals (EIPs) have been investigated from different perspectives (e.g. project management, technology acceptance, IS success). However, no systematic literature review was produced to systematize both the research efforts and the technology itself. This paper reports the first results of an extensive systematic literature review focused on EIP research and its categorization; specifically, it reports a conceptual model of EIP features. The previous attempt to categorize EIP features was published in 2002. For the purpose of the literature review, the content of 89 articles was analyzed in order to identify and categorize features of EIPs. The methodology of the literature review was as follows. Firstly, search queries in major indexing databases (Web of Science and SCOPUS) were used. The results of the queries were analyzed according to their usability for the goal of the study. Then, full-texts were coded in Atlas.ti according to a previously established coding scheme. The codes were categorized and the conceptual model of EIP features was created. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=enterprise%20information%20portal" title="enterprise information portal">enterprise information portal</a>, <a href="https://publications.waset.org/abstracts/search?q=content%20analysis" title=" content analysis"> content analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a>, <a href="https://publications.waset.org/abstracts/search?q=systematic%20literature%20review" title=" systematic literature review"> systematic literature review</a> </p> <a href="https://publications.waset.org/abstracts/59660/enterprise-information-portal-features-results-of-content-analysis-literature-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">298</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3816</span> Content-Based Image Retrieval Using HSV Color Space Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Qazanfari">Hamed Qazanfari</a>, <a href="https://publications.waset.org/abstracts/search?q=Hamid%20Hassanpour"> Hamid Hassanpour</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazem%20Qazanfari"> Kazem Qazanfari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database based on the visual content of a query image to retrieve similar images. 
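The retrieval loop itself can be sketched minimally: extract a color descriptor per image, then rank database images by descriptor distance. The quantized HSV histogram and L1 ranking below are an illustrative stand-in, not the paper's color difference histogram.

```python
import numpy as np

def hsv_histogram(img_hsv, bins=(8, 4, 4)):
    """Quantized HSV color histogram; img_hsv is an (H, W, 3) float array
    with all channels scaled to [0, 1). Bin counts are illustrative."""
    h, s, v = img_hsv[..., 0], img_hsv[..., 1], img_hsv[..., 2]
    idx = (np.minimum((h * bins[0]).astype(int), bins[0] - 1) * bins[1] * bins[2]
           + np.minimum((s * bins[1]).astype(int), bins[1] - 1) * bins[2]
           + np.minimum((v * bins[2]).astype(int), bins[2] - 1))
    hist = np.bincount(idx.ravel(), minlength=bins[0] * bins[1] * bins[2])
    return hist / hist.sum()

def retrieve(query, database):
    """Rank database images by L1 distance between their histograms."""
    q = hsv_histogram(query)
    dists = [np.abs(q - hsv_histogram(img)).sum() for img in database]
    return np.argsort(dists)  # most similar first

rng = np.random.default_rng(1)
reddish = rng.random((32, 32, 3)) * [0.1, 1, 1]            # hues near 0
bluish = 0.6 + rng.random((32, 32, 3)) * [0.1, 0.4, 0.4]   # hues near 0.6
print(retrieve(reddish, [bluish, reddish * 0.99])[0])      # perturbed copy wins
```

The abstract's CDH would replace `hsv_histogram` with a descriptor built from perceptual color differences between neighboring pixels, binned by color and edge orientation.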
In this paper, with the aim of simulating the human visual system's sensitivity to an image's edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features describe the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=content-based%20image%20retrieval" title="content-based image retrieval">content-based image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20difference%20histogram" title=" color difference histogram"> color difference histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=efficient%20features%20selection" title=" efficient features selection"> efficient features selection</a>, <a href="https://publications.waset.org/abstracts/search?q=entropy" title=" entropy"> entropy</a>, <a href="https://publications.waset.org/abstracts/search?q=correlation" title=" correlation"> correlation</a> </p> <a href="https://publications.waset.org/abstracts/75068/content-based-image-retrieval-using-hsv-color-space-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75068.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light 
px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">249</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3815</span> Investigating the Stylistic Features of Advertising: Ad Design and Creation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Asma%20Ben%20Abdallah">Asma Ben Abdallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Language has a powerful influence over people and their actions. The language of advertising has a strong impact on the consumer. It makes use of different features from the linguistic continuum. The present paper attempts to apply the theories of stylistics to the analysis of advertising texts. In order to decipher the stylistic features of the advertising discourse, 30 advertising text samples designed by MA Business students have been selected. These samples have been analyzed at the levels of design and content. The study brings insights into the use of stylistic devices in advertising, and it reveals that both linguistic and non-linguistic features of advertisements are frequently employed to develop a well-thought-out design and content. The practical significance of the study is to highlight the specificities of the advertising genre so that people interested in the language of advertising (Business students and ESP teachers) will have a better understanding of the nature of the language used and the techniques of writing and designing ads. Similarly, those working in the advertising sphere (ad designers) will appreciate the specificities of the advertising discourse. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20language%20of%20advertising" title="the language of advertising">the language of advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=advertising%20discourse" title=" advertising discourse"> advertising discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=ad%20design" title=" ad design"> ad design</a>, <a href="https://publications.waset.org/abstracts/search?q=stylistic%20features" title=" stylistic features"> stylistic features</a> </p> <a href="https://publications.waset.org/abstracts/93408/investigating-the-stylistic-features-of-advertising-ad-design-and-creation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93408.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">238</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3814</span> TARF: Web Toolkit for Annotating RNA-Related Genomic Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jialin%20Ma">Jialin Ma</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%20Meng"> Jia Meng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Genomic features, the genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. 
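Correlating genome-based features with transcript components boils down to a coordinate conversion against a gene model. A minimal sketch with an invented plus-strand gene model (exon intervals and CDS bounds are made up for illustration):

```python
# Minimal sketch of annotating genome-based positions against one
# transcript model (plus-strand; coordinates and gene model invented).
EXONS = [(100, 200), (300, 450), (600, 700)]   # genomic [start, end)
CDS_START, CDS_END = 150, 650                  # genomic CDS bounds

def to_transcript_coord(pos):
    """Genome position -> spliced transcript coordinate, or None if intronic."""
    offset = 0
    for start, end in EXONS:
        if start <= pos < end:
            return offset + (pos - start)
        offset += end - start
    return None

def component(pos):
    """Classify a genomic position as 5'UTR, CDS, 3'UTR, or intronic."""
    if to_transcript_coord(pos) is None:
        return 'intronic'
    if pos < CDS_START:
        return "5'UTR"
    if pos < CDS_END:
        return 'CDS'
    return "3'UTR"

for p in [120, 250, 320, 660]:
    print(p, component(p), to_transcript_coord(p))
```

A real implementation must also handle minus-strand genes (where transcript coordinates run opposite to genomic ones) and features that overlap multiple transcripts, which is where the ambiguity statistics mentioned below come from.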
For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based coordinates to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, and this is especially true for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF (web toolkit for annotating RNA-related genomic features), which provides a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it annotates the features with gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined into one table that is available for copy and download. Summary statistics on ambiguous assignments are also reported. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For the selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. 
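The landmark normalization behind such a global view can be illustrated with a simplified sketch. The 0-3 metagene axis below, where [0, 1) is the 5'UTR, [1, 2) the CDS and [2, 3) the 3'UTR, is an assumed simplification in the spirit of the Guitar package, not its actual implementation:

```python
# Hedged sketch of transcript-landmark normalization: each transcript
# position maps onto a 0-3 axis so that transcripts of different lengths
# become directly comparable and component-wise enrichment can be seen.
def metagene_position(tx_pos, utr5_len, cds_len, utr3_len):
    if tx_pos < utr5_len:
        return tx_pos / utr5_len                        # [0, 1): 5'UTR
    if tx_pos < utr5_len + cds_len:
        return 1 + (tx_pos - utr5_len) / cds_len        # [1, 2): CDS
    return 2 + (tx_pos - utr5_len - cds_len) / utr3_len  # [2, 3): 3'UTR

# A feature 30 nt into a 60 nt 5'UTR and one 500 nt into a 1000 nt CDS
# land at the same relative place regardless of transcript size.
print(metagene_position(30, 60, 900, 200))    # 0.5
print(metagene_position(560, 60, 1000, 200))  # 1.5
```

Pooling these normalized positions across all transcripts and plotting their density is what reveals enrichment near landmarks such as start codons or stop codons.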
The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), which are obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5’UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RNA-related%20genomic%20features" title="RNA-related genomic features">RNA-related genomic features</a>, <a href="https://publications.waset.org/abstracts/search?q=annotation" title=" annotation"> annotation</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20server" title=" web server"> web server</a> </p> <a href="https://publications.waset.org/abstracts/59044/tarf-web-toolkit-for-annotating-rna-related-genomic-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59044.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">207</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=128">128</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=129">129</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=features&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" 
rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a 
href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>