<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: convolutional transformation</title> <meta name="description" content="Search results for: convolutional transformation"> <meta name="keywords" content="convolutional transformation"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="convolutional transformation" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> 
</div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="convolutional transformation"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2043</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: convolutional transformation</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2043</span> A Custom Convolutional Neural Network with Hue, Saturation, Value Color for Malaria Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ghazala%20Hcini">Ghazala Hcini</a>, <a href="https://publications.waset.org/abstracts/search?q=Imen%20Jdey"> Imen Jdey</a>, <a href="https://publications.waset.org/abstracts/search?q=Hela%20Ltifi"> Hela Ltifi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Malaria should be considered and handled as a potential medical catastrophe. 
One of the most challenging tasks in the field of microscopy image processing arises from differences in test design and the variability of cell classes. In this article, we focus on applying deep learning to classify patients by identifying images of infected and uninfected cells. We evaluated multiple approaches, including a classification approach based on the Hue, Saturation, Value (HSV) color space. HSV is used because of its superior ability to represent image brightness; finally, for classification, a convolutional neural network (CNN) architecture is created. Clusters of features were used to produce the classification. The extracted features were then refined, and several additional noise types were included in the data. The suggested method has a precision of 99.79%, a recall value of 99.55%, and provides 99.96% accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20transformation" title=" color transformation"> color transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=HSV%20color" title=" HSV color"> HSV color</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20diagnosis" title=" malaria diagnosis"> malaria diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=malaria%20cells%20images" title=" malaria cells images"> malaria cells images</a> </p> <a href="https://publications.waset.org/abstracts/161232/a-custom-convolutional-neural-network-with-hue-saturation-value-color-for-malaria-classification" class="btn 
btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161232.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2042</span> Traffic Light Detection Using Image Segmentation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaishnavi%20Shivde">Vaishnavi Shivde</a>, <a href="https://publications.waset.org/abstracts/search?q=Shrishti%20Sinha"> Shrishti Sinha</a>, <a href="https://publications.waset.org/abstracts/search?q=Trapti%20Mishra"> Trapti Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions and for autonomous driving in the city. This paper proposes a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network, i.e., the U-Net architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated using the Traffic Light Dataset, which contains masked traffic light image data. The first stage is detection, which is accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of possible traffic lights. The second stage is recognition, which means identifying the color, i.e., the state, of the traffic light; this is achieved using a convolutional neural network (the U-Net architecture). 
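The two-stage pipeline above relies on a color transformation before recognition. As a minimal illustrative sketch (not the paper's implementation; the hue and saturation thresholds below are assumptions), classifying the state of a lit lamp pixel after an RGB-to-HSV transform might look like:

```python
import colorsys

def classify_light_color(rgb):
    """Guess a traffic-light state from one RGB pixel via HSV hue.

    Illustrative only: the hue/saturation/value thresholds are
    assumptions for this sketch, not values from the paper.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # h, s, v all in [0, 1]
    if s < 0.3 or v < 0.3:
        return "unknown"  # too dull or dark to be a lit lamp
    deg = h * 360.0
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 75:
        return "yellow"
    if deg < 180:
        return "green"
    return "unknown"
```

In a pipeline like the one described, such a per-pixel transform would feed the segmentation stage, while the U-Net performs the actual recognition.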
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=traffic%20light%20detection" title="traffic light detection">traffic light detection</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20segmentation" title=" image segmentation"> image segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/137254/traffic-light-detection-using-image-segmentation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137254.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2041</span> Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pawe%C5%82%20Konieczka">Paweł Konieczka</a>, <a href="https://publications.waset.org/abstracts/search?q=Lech%20Raczy%C5%84ski"> Lech Raczyński</a>, <a href="https://publications.waset.org/abstracts/search?q=Wojciech%20Wi%C5%9Blicki"> Wojciech Wiślicki</a>, <a href="https://publications.waset.org/abstracts/search?q=Oleksandr%20Fedoruk"> Oleksandr Fedoruk</a>, <a href="https://publications.waset.org/abstracts/search?q=Konrad%20Klimaszewski"> 
Konrad Klimaszewski</a>, <a href="https://publications.waset.org/abstracts/search?q=Przemys%C5%82aw%20Kopka"> Przemysław Kopka</a>, <a href="https://publications.waset.org/abstracts/search?q=Wojciech%20Krzemie%C5%84"> Wojciech Krzemień</a>, <a href="https://publications.waset.org/abstracts/search?q=Roman%20Shopa"> Roman Shopa</a>, <a href="https://publications.waset.org/abstracts/search?q=Jakub%20Baran"> Jakub Baran</a>, <a href="https://publications.waset.org/abstracts/search?q=Aur%C3%A9lien%20Coussat"> Aurélien Coussat</a>, <a href="https://publications.waset.org/abstracts/search?q=Neha%20Chug"> Neha Chug</a>, <a href="https://publications.waset.org/abstracts/search?q=Catalina%20Curceanu"> Catalina Curceanu</a>, <a href="https://publications.waset.org/abstracts/search?q=Eryk%20Czerwi%C5%84ski"> Eryk Czerwiński</a>, <a href="https://publications.waset.org/abstracts/search?q=Meysam%20Dadgar"> Meysam Dadgar</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamil%20Dulski"> Kamil Dulski</a>, <a href="https://publications.waset.org/abstracts/search?q=Aleksander%20Gajos"> Aleksander Gajos</a>, <a href="https://publications.waset.org/abstracts/search?q=Beatrix%20C.%20Hiesmayr"> Beatrix C. 
Hiesmayr</a>, <a href="https://publications.waset.org/abstracts/search?q=Krzysztof%20Kacprzak"> Krzysztof Kacprzak</a>, <a href="https://publications.waset.org/abstracts/search?q=%C5%82ukasz%20Kap%C5%82on"> łukasz Kapłon</a>, <a href="https://publications.waset.org/abstracts/search?q=Grzegorz%20Korcyl"> Grzegorz Korcyl</a>, <a href="https://publications.waset.org/abstracts/search?q=Tomasz%20Kozik"> Tomasz Kozik</a>, <a href="https://publications.waset.org/abstracts/search?q=Deepak%20Kumar"> Deepak Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Szymon%20Nied%C5%BAwiecki"> Szymon Niedźwiecki</a>, <a href="https://publications.waset.org/abstracts/search?q=Dominik%20Panek"> Dominik Panek</a>, <a href="https://publications.waset.org/abstracts/search?q=Szymon%20Parzych"> Szymon Parzych</a>, <a href="https://publications.waset.org/abstracts/search?q=Elena%20P%C3%A9rez%20Del%20R%C3%ADo"> Elena Pérez Del Río</a>, <a href="https://publications.waset.org/abstracts/search?q=Sushil%20Sharma"> Sushil Sharma</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivani%20Shivani"> Shivani Shivani</a>, <a href="https://publications.waset.org/abstracts/search?q=Magdalena%20Skurzok"> Magdalena Skurzok</a>, <a href="https://publications.waset.org/abstracts/search?q=Ewa%20%C5%82ucja%20St%C4%99pie%C5%84"> Ewa łucja Stępień</a>, <a href="https://publications.waset.org/abstracts/search?q=Faranak%20Tayefi"> Faranak Tayefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Pawe%C5%82%20Moskal"> Paweł Moskal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper develops the transformation of non-image data into 2-dimensional matrices, as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. 
Nonetheless, much PET data still exists in non-image format, and this fact raises the question of whether such data can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=kernel%20principal%20component%20analysis" title=" kernel principal component analysis"> kernel principal component analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20imaging" title=" medical imaging"> medical imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=positron%20emission%20tomography" title=" positron emission tomography"> positron emission tomography</a> </p> <a href="https://publications.waset.org/abstracts/150734/transformation-of-positron-emission-tomography-raw-data-into-images-for-classification-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2040</span> Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bharatendra%20Rai">Bharatendra 
Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradient problems when developing deep learning models. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance is a challenging problem. Convolutional recurrent networks that combine the advantages of long short-term memory networks and convolutional neural networks can be useful for text classification performance improvements. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task where fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory%20networks" title="long short-term memory networks">long short-term memory networks</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20recurrent%20networks" title=" convolutional recurrent networks"> convolutional recurrent networks</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification" title=" text classification"> text classification</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperparameter%20tuning" title=" hyperparameter tuning"> hyperparameter tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=Tukey%20honest%20significant%20differences" title=" Tukey honest significant differences"> Tukey honest significant differences</a> </p> <a 
href="https://publications.waset.org/abstracts/169795/experimental-study-of-hyperparameter-tuning-a-deep-learning-convolutional-recurrent-network-for-text-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2039</span> Classification of Echo Signals Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aisulu%20Tileukulova">Aisulu Tileukulova</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhexebay%20Dauren"> Zhexebay Dauren</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radar plays an important role because it is widely used in civil and military fields, and target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects at radar facilities is reduced against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as to train the neural network. In this paper, the convolutional neural network (CNN) consists of 8 convolutional layers and 3 fully connected layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. To detect a target, it is necessary to form a data set for training the neural network. We built a confusion matrix for the CNN model to measure the effectiveness of our model. 
The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using CNN shows high accuracy and significantly speeds up the process of predicting the target. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=echo%20signals" title=" echo signals"> echo signals</a> </p> <a href="https://publications.waset.org/abstracts/147596/classification-of-echo-signals-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2038</span> Investigation of New Gait Representations for Improving Gait Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chirawat%20Wattanapanich">Chirawat Wattanapanich</a>, <a href="https://publications.waset.org/abstracts/search?q=Hong%20Wei"> Hong Wei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study presents new gait representations for improving gait recognition accuracy on cross gait appearances, such as normal walking, wearing a coat and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. 
One is to append the lower knee regions to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on CASIA Dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both entire images used as features and dimensionality-reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower knee regions to the original GEI and applying convolutions to the GEI, significantly contribute to the performance improvement in gait recognition. The experimental results show that the average recognition rate can be improved from 75.65% to 87.50%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20image" title="convolutional image">convolutional image</a>, <a href="https://publications.waset.org/abstracts/search?q=lower%20knee" title=" lower knee"> lower knee</a>, <a href="https://publications.waset.org/abstracts/search?q=gait" title=" gait"> gait</a> </p> <a href="https://publications.waset.org/abstracts/80553/investigation-of-new-gait-representations-for-improving-gait-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80553.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">202</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2037</span> Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier</h5> <div class="card-body"> 
<p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kemka%20Ihemelandu">Kemka Ihemelandu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chukwuemeka%20Ihemelandu"> Chukwuemeka Ihemelandu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improving diagnostic accuracy to decrease misdiagnosis using artificial intelligence (AI) continues to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tone, have increasingly been recognized as a problem, resulting in noted limitations of the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithm, based on a data augmentation method that optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. The result is a CNN model trained on modified input data that can learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model's accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma. 
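The geometric transformations listed above can be sketched with plain Python lists. This is a hedged illustration of the augmentation idea (horizontal flip, 90° rotation, translation), not the authors' actual pipeline, which would also apply scaling and shearing to real image tensors:

```python
def flip_horizontal(img):
    # mirror each row: img is a list of rows of pixel values
    return [row[::-1] for row in img]

def rotate90(img):
    # rotate the image 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def translate(img, dx, fill=0):
    # shift the image dx pixels to the right, padding with `fill`
    return [[fill] * dx + row[:len(row) - dx] for row in img]

def augment(img):
    # original plus simple geometric variants, as extra training samples
    return [img, flip_horizontal(img), rotate90(img), translate(img, 1)]
```

Each variant preserves the diagnostic content of the lesion image while changing its geometry, which is what lets the classifier see more diverse training samples generated from the original ones.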
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bias" title="bias">bias</a>, <a href="https://publications.waset.org/abstracts/search?q=augmentation" title=" augmentation"> augmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=melanoma" title=" melanoma"> melanoma</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/147487/bias-prevention-in-automated-diagnosis-of-melanoma-augmentation-of-a-convolutional-neural-network-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147487.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2036</span> Comparison of Classical Computer Vision vs. 
Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Cesar%20Pereira%20Junior">Paulo Cesar Pereira Junior</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20Monteiro"> Alexandre Monteiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20da%20Luz%20Ribeiro"> Rafael da Luz Ribeiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Carlos%20Sobieranski"> Antonio Carlos Sobieranski</a>, <a href="https://publications.waset.org/abstracts/search?q=Aldo%20von%20Wangenheim"> Aldo von Wangenheim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches for the specific precision agriculture problem of weed mapping in aerial images of sugarcane fields. A systematic literature review was conducted to find which computer vision methods are being used on this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared to a ground truth produced by a human expert for validation. The results indicate that the convolutional neural networks present better precision and generalize better than the classical models. 
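As a hedged sketch of what a classical baseline in such a comparison can look like (the abstract does not list the exact methods implemented), the Excess Green (ExG) vegetation index is a common classical starting point for vegetation and weed mapping:

```python
def excess_green(rgb):
    # ExG = 2g - r - b, computed on chromaticity-normalized channels
    r, g, b = rgb
    total = (r + g + b) or 1  # avoid division by zero on black pixels
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def vegetation_mask(img, thresh=0.1):
    # classical baseline: threshold the per-pixel ExG index
    # (the 0.1 threshold is an assumption for illustration)
    return [[excess_green(px) > thresh for px in row] for row in img]
```

Index-plus-threshold methods like this are cheap and interpretable but tend to generalize poorly across lighting and soil conditions, which is consistent with the CNNs' advantage reported above.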
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20processing" title=" digital image processing"> digital image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicles" title=" unmanned aerial vehicles"> unmanned aerial vehicles</a> </p> <a href="https://publications.waset.org/abstracts/112982/comparison-of-classical-computer-vision-vs-convolutional-neural-networks-approaches-for-weed-mapping-in-aerial-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112982.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2035</span> Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tharini%20N.%20de%20Silva">Tharini N. 
de Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiao%20Zhibo"> Xiao Zhibo</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhao%20Rui"> Zhao Rui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mao%20Kezhi"> Mao Kezhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep-learning-based approach that trains a convolutional neural network model to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show higher accuracy than previously achieved for causal relation identification tasks. 
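As an illustration of the first, knowledge-based approach, cue-word indicator features can be extracted as below. The lexicon here is an assumption for the sketch, not the paper's actual feature set, which is richer than surface cue words:

```python
# Illustrative causal cue lexicon (assumed, not taken from the paper).
CAUSAL_CUES = {"because", "since", "causes", "caused", "therefore", "consequently"}

def cue_features(sentence):
    # one binary indicator feature per cue word in the lexicon
    tokens = sentence.lower().split()
    return {cue: int(cue in tokens) for cue in sorted(CAUSAL_CUES)}
```

Such indicator vectors can feed any conventional classifier, whereas the second approach learns its features directly from text with a CNN.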
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=causal%20realtion%20extraction" title="causal relation extraction">causal relation extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=relation%20extracton" title=" relation extraction"> relation extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20representation" title=" text representation"> text representation</a> </p> <a href="https://publications.waset.org/abstracts/61573/causal-relation-identification-using-convolutional-neural-networks-and-knowledge-based-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">733</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2034</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is an important research topic in the field of computer vision, especially in video surveillance systems. 
To solve this problem, we apply a convolutional neural network at multiple scales and at multiple locations in the image within one sliding window. Rather than regressing a bounding box around the area of interest, as most detection networks do, this architecture treats the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the task of predicting a single category for an image from its data points, covering any labels assigned across the image: an image can be classified as a day or night shot, or, likewise, images of cars and motorbikes can be automatically placed in their respective collections. Deep learning for image classification generally relies on convolutional layers; a network built from them is referred to as a convolutional neural network (CNN). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">2033</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method incorporates a convolutional layer capable of automatically learning the correlation between neighbouring pixels. In its current form, a convolutional neural network (CNN) learns features based on an image's content instead of the structural features of the image. The layer is particularly designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The method was evaluated on recent natural and computer generated images and was found to perform better than current state-of-the-art methods. 
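A concrete way to see what "suppressing an image's content" means is to compute a noise residual: subtract a local estimate of the content from each pixel, leaving mostly noise-like structure. The sketch below uses a fixed 3x3 mean as the content estimate, whereas the layer in the paper learns its filter; the grids here are toy data.

```python
# Minimal sketch of noise-residual extraction, the kind of signal a
# sensor-pattern-noise layer is meant to emphasise: each pixel minus a
# local (3x3 mean) estimate of the content. Toy grids, not real images.

def noise_residual(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # mean over the 3x3 neighbourhood, clipped at the borders
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = img[y][x] - sum(vals) / len(vals)
    return out

flat = [[5.0] * 4 for _ in range(4)]   # perfectly smooth image
print(noise_residual(flat)[2][2])      # -> 0.0 (smooth content cancels)
```

Smooth content cancels out, while pixel-level deviations (the sensor-noise-like part) survive the subtraction and can then feed the statistical features the classifier learns.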
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2032</span> Developing a Theory for Study of Transformation of Historic Cities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sana%20Ahrar">Sana Ahrar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Cities are undergoing rapid transformation with the change in lifestyle and technological advancements. These transformations may be experienced or physically visible in the built form. This paper focuses on the relationship between the social, physical environment, change in lifestyle and the interrelated factors influencing the transformation of any historic city. 
Shahjahanabad as a city has undergone transformation under the various political powers as well as the various policy implementations after independence. These visible traces of transformation diffused throughout the city may be due to socio-economic, historic, political factors and due to the globalization process. This study will enable the development of a theory for studying the transformation of historic cities such as Shahjahanabad, which has been plundered and rebuilt, and which still thrives as a ‘living heritage city’. The theory developed will frame the process of studying transformation and can be used by planners, policy makers and researchers in different urban contexts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=heritage" title="heritage">heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=historic%20cities" title=" historic cities"> historic cities</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahjahanabad" title=" Shahjahanabad"> Shahjahanabad</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation" title=" transformation"> transformation</a> </p> <a href="https://publications.waset.org/abstracts/87941/developing-a-theory-for-study-of-transformation-of-historic-cities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87941.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">397</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2031</span> Traffic Sign Recognition System Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Devineni%20Vijay%20Bhaskar">Devineni 
Vijay Bhaskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Yendluri%20Raja"> Yendluri Raja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose a model for traffic sign detection based on Convolutional Neural Networks (CNN). We first convert the original image into a grayscale image with support vector machines, then use convolutional neural networks with fixed and learnable layers for detection and recognition. The fixed layer can reduce the number of regions of interest to examine and crop boundaries very close to those of the traffic signs. The learnable layers can raise the accuracy of detection significantly. In addition, we use bootstrap procedures to improve accuracy and avoid the overfitting problem. On the German Traffic Sign Detection Benchmark, we obtained competitive results, with an area under the precision-recall curve (AUC) of 99.49% in the group “Risk”, and an AUC of 96.62% in the group “Obligatory”. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20signs" title=" traffic signs"> traffic signs</a>, <a href="https://publications.waset.org/abstracts/search?q=bootstrap%20procedures" title=" bootstrap procedures"> bootstrap procedures</a>, <a href="https://publications.waset.org/abstracts/search?q=precision-recall%20curve" title=" precision-recall curve"> precision-recall curve</a> </p> <a href="https://publications.waset.org/abstracts/149896/traffic-sign-recognition-system-using-convolutional-neural-networkdevineni" 
class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2030</span> Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shoujia%20Fang">Shoujia Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Guoqing%20Ding"> Guoqing Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Chen"> Xin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quality of press-fit assembly is closely related to the reliability and safety of the product. The paper proposes a keypoint detection method based on a convolutional neural network to improve the accuracy of keypoint detection in the press-fit curve. It provides an auxiliary basis for judging the quality of press-fit assembly. The press-fit curve is a curve of press-fit force against displacement. Both force data and distance data are time-series data. Therefore, a one-dimensional convolutional neural network is used to process the press-fit curve. After the obtained press-fit data is filtered, a multi-layer one-dimensional convolutional neural network is used to perform automatic learning of press-fit curve features, which are then sent to a multi-layer perceptron to finally output the keypoint of the curve. We used data from press-fit assembly equipment in the actual production process to train the CNN model, and we used different data from the same equipment to evaluate the performance of detection. 
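The pipeline just described (filter the force/displacement series, then let 1-D convolutions expose keypoint features) can be caricatured in a few lines. This hand-rolled sketch replaces the learned multi-layer 1-D CNN with a fixed moving-average kernel and a slope-change rule, purely to illustrate the data flow; the force series is synthetic toy data, not measurements from the paper.

```python
# Hand-rolled sketch of the data flow: smooth a press-fit force series
# with a 1-D convolution (uniform kernel), then flag a candidate
# keypoint where the local slope changes most sharply (the curve knee).

def smooth(series, width=3):
    """1-D convolution with a uniform kernel (simple moving average)."""
    return [sum(series[i:i + width]) / width
            for i in range(len(series) - width + 1)]

def keypoint(series):
    """Index (in the series) of the largest change in local slope."""
    slopes = [b - a for a, b in zip(series, series[1:])]
    curvature = [abs(b - a) for a, b in zip(slopes, slopes[1:])]
    # curvature[i] sits at point i + 1 of the series
    return curvature.index(max(curvature)) + 1

# Flat contact phase, then a sharp rise once press-fit force builds up.
force = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 9.0, 18.0]
print(keypoint(smooth(force)))  # -> 5
```

The learned CNN plays the role of both steps at once: its filters are fitted to the curves instead of being fixed, and the perceptron head outputs the keypoint location directly.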
Compared with the existing research result, the performance of detection was significantly improved. This method can provide a reliable basis for the judgment of press-fit quality. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title="keypoint detection">keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=curve%20feature" title=" curve feature"> curve feature</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=press-fit%20assembly" title=" press-fit assembly"> press-fit assembly</a> </p> <a href="https://publications.waset.org/abstracts/98263/detection-of-keypoint-in-press-fit-curve-based-on-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2029</span> Aspect-Level Sentiment Analysis with Multi-Channel and Graph Convolutional Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiajun%20Wang">Jiajun Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoge%20Li"> Xiaoge Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the aspect-level sentiment analysis task is to identify the sentiment polarity of aspects in a sentence. 
Currently, most methods focus on using neural networks and attention mechanisms to model the relationship between aspects and context, but they ignore the dependence between words at different ranges in the sentence, resulting in deviations when assigning relation weights to words other than the aspect words. To solve these problems, we propose a new aspect-level sentiment analysis model that combines a multi-channel convolutional network with a graph convolutional network (GCN). First, the context and the degree of association between words are characterized by Long Short-Term Memory (LSTM) and a self-attention mechanism. In addition, a multi-channel convolutional network is used to extract the features of words at different ranges. Finally, a graph convolutional network is used to associate the node information of the dependency tree structure. We conduct experiments on four benchmark datasets and compare the results with those of other models; the comparison shows that our model is more accurate and effective. 
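The graph-convolution step over the dependency tree can be sketched concretely: each word's feature vector is replaced by an aggregate of its own vector and its tree neighbours'. The sketch below uses mean aggregation (row-normalised adjacency with self-loops) and omits the learned weight matrix and nonlinearity; the features and edges are toy values.

```python
# Minimal sketch of one graph-convolution step over a dependency tree:
# each node's features become the mean of its own and its neighbours'
# features (adjacency with self-loops, row-normalised). Toy values only.

def gcn_layer(features, edges):
    n = len(features)
    neighbours = {i: {i} for i in range(n)}   # self-loops
    for a, b in edges:                        # undirected tree edges
        neighbours[a].add(b)
        neighbours[b].add(a)
    dim = len(features[0])
    out = []
    for i in range(n):
        nbrs = neighbours[i]
        out.append([sum(features[j][d] for j in nbrs) / len(nbrs)
                    for d in range(dim)])
    return out

# 3-word "sentence": word 1 is the head of words 0 and 2.
feats = [[1.0], [0.0], [2.0]]
edges = [(0, 1), (1, 2)]
print(gcn_layer(feats, edges))  # -> [[0.5], [1.0], [1.0]]
```

Stacking such layers lets information flow along dependency arcs, which is how the model ties an aspect word to sentiment-bearing words that are syntactically close but linearly distant.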
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aspect-level%20sentiment%20analysis" title="aspect-level sentiment analysis">aspect-level sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-channel%20convolution%20network" title=" multi-channel convolution network"> multi-channel convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolution%20network" title=" graph convolution network"> graph convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20tree" title=" dependency tree"> dependency tree</a> </p> <a href="https://publications.waset.org/abstracts/146513/aspect-level-sentiment-analysis-with-multi-channel-and-graph-convolutional-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">219</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2028</span> Mastering Digitization: A Quality-Adapted Digital Transformation Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Franziska%20Schaefer">Franziska Schaefer</a>, <a href="https://publications.waset.org/abstracts/search?q=Marlene%20Kuhn"> Marlene Kuhn</a>, <a href="https://publications.waset.org/abstracts/search?q=Heiner%20Otten"> Heiner Otten</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the very near future, digitization will be the main challenge a company has to master to survive in a highly competitive market. 
Developing the right transformation strategy by considering all relevant aspects determines the success or failure of a company. Especially the digital focus on the customer plays a key role in creating sustainable competitive advantages, also leading to new tasks within the quality management. Therefore, quality management needs to be particularly addressed to support the upcoming digital change. In this paper, we present an analysis of existing digital transformation approaches and derive a transformation strategy from a quality management perspective. We identify and classify different transformation dimensions and assess their relevance to quality management tasks, resulting in a quality-adapted digital transformation model. Furthermore, we introduce applicable and customized quality management methods to support the presented digital transformation tasks. With our developed model we provide a digital transformation guideline from a quality perspective to master future disruptive changes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title="digital transformation">digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=digitization" title=" digitization"> digitization</a>, <a href="https://publications.waset.org/abstracts/search?q=quality%20management" title=" quality management"> quality management</a>, <a href="https://publications.waset.org/abstracts/search?q=strategy" title=" strategy"> strategy</a> </p> <a href="https://publications.waset.org/abstracts/78145/mastering-digitization-a-quality-adapted-digital-transformation-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78145.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">480</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2027</span> An Automatic Model Transformation Methodology Based on Semantic and Syntactic Comparisons and the Granularity Issue Involved</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiexin%20Wang">Tiexin Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sebastien%20Truptil"> Sebastien Truptil</a>, <a href="https://publications.waset.org/abstracts/search?q=Frederick%20Benaben"> Frederick Benaben</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Model transformation, as a pivotal aspect of Model-driven engineering, attracts more and more attention from both researchers and practitioners. Many domains (enterprise engineering, software engineering, knowledge engineering, etc.) 
use model transformation principles and practices to address their domain-specific problems; furthermore, model transformation can also be used to bridge the gap between different domains by sharing and exchanging knowledge. As model transformation has become widely used, a new requirement has emerged: to define the transformation process effectively and efficiently, and to reduce the manual effort involved. This paper presents an automatic model transformation methodology based on semantic and syntactic comparisons, and focuses particularly on the granularity issue that exists in the transformation process. Compared to traditional model transformation methodologies, this methodology serves a general, cross-domain purpose. Semantic and syntactic checking measurements are combined into a refined transformation process, which solves the granularity issue. Moreover, the semantic and syntactic comparisons are supported by a software tool, so that manual effort is replaced. 
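One plausible building block for the syntactic half of such a comparison is a string-similarity score between model-element names. The sketch below uses the standard library's `difflib.SequenceMatcher`; the element names and the 0.6 threshold are illustrative assumptions, not values taken from the paper, and the semantic check (e.g. against a thesaurus or ontology) is not shown.

```python
# Illustrative syntactic comparison between two model-element names
# using the stdlib SequenceMatcher. Names and threshold are assumptions.
from difflib import SequenceMatcher

def syntactic_match(name_a, name_b, threshold=0.6):
    """Return (similarity ratio in [0, 1], whether it clears the threshold)."""
    score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return score, score >= threshold

score, matched = syntactic_match("CustomerOrder", "customer_order")
print(round(score, 2), matched)
```

An automatic mapper would run such checks pairwise between source and target metamodel elements, falling back to semantic comparison when the syntactic score is inconclusive.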
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20model%20transformation" title="automatic model transformation">automatic model transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=granularity%20issue" title=" granularity issue"> granularity issue</a>, <a href="https://publications.waset.org/abstracts/search?q=model-driven%20engineering" title=" model-driven engineering"> model-driven engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20and%20syntactic%20comparisons" title=" semantic and syntactic comparisons"> semantic and syntactic comparisons</a> </p> <a href="https://publications.waset.org/abstracts/27878/an-automatic-model-transformation-methodology-based-on-semantic-and-syntactic-comparisons-and-the-granularity-issue-involved" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27878.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">395</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2026</span> Makhraj Recognition Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zan%20Azma%20Nasruddin">Zan Azma Nasruddin</a>, <a href="https://publications.waset.org/abstracts/search?q=Irwan%20Mazlin"> Irwan Mazlin</a>, <a href="https://publications.waset.org/abstracts/search?q=Nor%20Aziah%20Daud"> Nor Aziah Daud</a>, <a href="https://publications.waset.org/abstracts/search?q=Fauziah%20Redzuan"> Fauziah Redzuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Fariza%20Hanis%20Abdul%20Razak"> Fariza Hanis Abdul Razak</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> This paper focuses on a machine learning system that learns the correct pronunciation of Makhraj Huroofs. Usually, people need to find an expert to pronounce the Huroof accurately. In this study, the researchers have developed a system that is able to learn the selected Huroofs, which are ha, tsa, zho, and dza, using a Convolutional Neural Network. The researchers present the chosen CNN architecture, which enables the system to learn the data (Huroofs) as quickly as possible and to produce high accuracy during prediction. The researchers experimented with the system to measure the accuracy and the cross entropy during the training process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=Makhraj%20recognition" title=" Makhraj recognition"> Makhraj recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20recognition" title=" speech recognition"> speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=tensorflow" title=" tensorflow"> tensorflow</a> </p> <a href="https://publications.waset.org/abstracts/85389/makhraj-recognition-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85389.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">335</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2025</span> Effects of Mechanical 
Test and Shape of Grain Boundary on Martensitic Transformation in Fe-Ni-C Steel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mounir%20Gaci">Mounir Gaci</a>, <a href="https://publications.waset.org/abstracts/search?q=Salim%20Meziani"> Salim Meziani</a>, <a href="https://publications.waset.org/abstracts/search?q=Atmane%20Fouathia"> Atmane Fouathia </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the present paper is to model the behavior of a TRIP (Transformation Induced Plasticity) steel alloy during the solid/solid phase transition. A two-dimensional micromechanical model is implemented in finite element software (ZEBULON) to simulate the martensitic transformation in a Fe-Ni-C steel grain under a mechanical tensile stress of 250 MPa. The effects of a non-uniform grain boundary and of the mechanical shear load criterion on the transformation and on the TRIP value during martensitic transformation are studied. The suggested mechanical criterion favours the influence of the shear phenomenon on the progression of the martensitic transformation (Magee’s mechanism). The obtained results are in satisfactory agreement with experimental ones and show the influence of the grain boundary shape and the chosen mechanical criterion (SMF) on the transformation parameters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=martensitic%20transformation" title="martensitic transformation">martensitic transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=non-uniform%20Grain%20Boundary" title=" non-uniform Grain Boundary"> non-uniform Grain Boundary</a>, <a href="https://publications.waset.org/abstracts/search?q=TRIP" title=" TRIP"> TRIP</a>, <a href="https://publications.waset.org/abstracts/search?q=shear%20Mechanical%20force%20%28SMF%29" title=" shear Mechanical force (SMF)"> shear Mechanical force (SMF)</a> </p> <a href="https://publications.waset.org/abstracts/42236/effects-of-mechanical-test-and-shape-of-grain-boundary-on-martensitic-transformation-in-fe-ni-c-steel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42236.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">261</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2024</span> Generating Innovations in Established Banks through Digital Transformation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wisu%20Suntoyo">Wisu Suntoyo</a>, <a href="https://publications.waset.org/abstracts/search?q=Dedy%20Sushandoyo"> Dedy Sushandoyo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Innovation and digital transformation are essential for firms’ competitiveness in the digital age. The competition in Indonesia’s banking industry provides an intriguing case study for understanding how digital transformation can generate innovation in established companies. 
The empirical evidence of this study is mainly based on interviews and annual reports examining four established banks in their various states of digital transformation. The findings of this study reveal that banks’ digital transformations that lead to innovations differ in terms of the activities undertaken and the outcomes achieved, depending on their state of advancement. Digital transformation is a complex and challenging process, and this study finds that with this strategy, established banks have shown themselves capable of generating innovation. Banks can choose types of transformation activities that generate radical, architectural, modular, or even incremental innovations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title="digital transformation">digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=innovations" title=" innovations"> innovations</a>, <a href="https://publications.waset.org/abstracts/search?q=banking%20industry" title=" banking industry"> banking industry</a>, <a href="https://publications.waset.org/abstracts/search?q=established%20banks" title=" established banks"> established banks</a> </p> <a href="https://publications.waset.org/abstracts/166179/generating-innovations-in-established-banks-through-digital-transformation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166179.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2023</span> Health Transformation Program and Effects on Health Expenditures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Zeynep%20Karacor">Zeynep Karacor</a>, <a href="https://publications.waset.org/abstracts/search?q=Rahime%20Hulya%20Ozturk"> Rahime Hulya Ozturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, rising population density and the problem of an aging population have drawn attention to health expenditures. In Turkey, some regulations and infrastructure changes have occurred in the health sector. These changes are called the Health Transformation Program. The program seeks to improve the productivity of health services, patient satisfaction, and the quality of services. Some radical changes have been applied to the Turkish economy in this context. The aim of this paper is to present the effects of the Health Transformation Program on health expenditures. The first part of the paper discusses the health system and its applications in Turkey. The second part explains the aims of the Health Transformation Program, and the third part examines the effects of the Health Transformation Program on health expenditures. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=health%20transformation%20program" title="health transformation program">health transformation program</a>, <a href="https://publications.waset.org/abstracts/search?q=Turkey" title=" Turkey"> Turkey</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20services" title=" health services"> health services</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20expenditures" title=" health expenditures"> health expenditures</a> </p> <a href="https://publications.waset.org/abstracts/57777/health-transformation-program-and-effects-on-health-expenditures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">395</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2022</span> Frequency Transformation with Pascal Matrix Equations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Phuoc%20Si%20Nguyen">Phuoc Si Nguyen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Frequency transformation with Pascal matrix equations is a method for transforming an electronic filter (analogue or digital) into another filter. The technique is based on frequency transformation in the s-domain, bilinear z-transform with pre-warping frequency, inverse bilinear transformation, and a very useful application of Pascal’s triangle that simplifies computation and enables calculation by hand when transforming from one filter to another. 
This paper will introduce two methods to transform a filter into a digital filter: frequency transformation from the s-domain into the z-domain; and frequency transformation in the z-domain. Further, two Pascal matrix equations are derived: an analogue to digital filter Pascal matrix equation and a digital to digital filter Pascal matrix equation. These are used to design a desired digital filter from a given filter. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=frequency%20transformation" title="frequency transformation">frequency transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=bilinear%20z-transformation" title=" bilinear z-transformation"> bilinear z-transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=pre-warping%20frequency" title=" pre-warping frequency"> pre-warping frequency</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20filters" title=" digital filters"> digital filters</a>, <a href="https://publications.waset.org/abstracts/search?q=analog%20filters" title=" analog filters"> analog filters</a>, <a href="https://publications.waset.org/abstracts/search?q=pascal%E2%80%99s%20triangle" title=" pascal’s triangle"> pascal’s triangle</a> </p> <a href="https://publications.waset.org/abstracts/34866/frequency-transformation-with-pascal-matrix-equations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">549</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2021</span> Tumor Detection Using Convolutional Neural Networks (CNN) Based Neural Network</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinai%20K.%20Singh">Vinai K. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural network-based learning techniques include several models of convolutional networks. Their applicability and appropriateness can only be determined once they are deployed with large datasets. Clinical and pathological images of lobular carcinoma are thought to exhibit a large number of random formations and textures, and working with such images is a difficult problem in machine learning. Numerous studies focusing on wet laboratories and their outcomes have been published with fresh commentary on this line of investigation. In this research, we provide a framework that can operate effectively on raw images of various resolutions while easing the issues caused by the presence of patterns and texturing. The suggested approach produces very good results that may be used to support decisions in the diagnosis of cancer. 
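The abstract above does not disclose the architecture, but a common way to let a convolutional classifier accept raw images of various resolutions is to place an adaptive (global) average pooling stage between the convolutional features and the classifier head, so the dense layers always see a fixed input size. A minimal NumPy sketch of adaptive average pooling (illustrative only, not the authors' code):

```python
import numpy as np

def adaptive_avg_pool2d(x, out_h, out_w):
    """Average-pool a (channels, H, W) feature map to a fixed
    (channels, out_h, out_w) shape, whatever the input resolution."""
    c, h, w = x.shape
    out = np.zeros((c, out_h, out_w))
    for i in range(out_h):
        # Bin edges follow the usual adaptive-pooling rule:
        # rows [floor(i*H/out_h), ceil((i+1)*H/out_h))
        h0, h1 = (i * h) // out_h, -((-(i + 1) * h) // out_h)
        for j in range(out_w):
            w0, w1 = (j * w) // out_w, -((-(j + 1) * w) // out_w)
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

# Feature maps of different spatial sizes all map to the same fixed shape,
# so a downstream dense classifier sees a constant input dimension.
for size in [(17, 23), (64, 64), (100, 37)]:
    feat = np.random.rand(8, *size)          # 8 channels, arbitrary resolution
    assert adaptive_avg_pool2d(feat, 4, 4).shape == (8, 4, 4)
```

This is the same mechanism deep-learning frameworks expose (e.g. an adaptive average-pooling layer); it removes the fixed-resolution constraint without resizing the raw images themselves.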
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lobular%20carcinoma" title="lobular carcinoma">lobular carcinoma</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks%20%28CNN%29" title=" convolutional neural networks (CNN)"> convolutional neural networks (CNN)</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=histopathological%20imagery%20scans" title=" histopathological imagery scans"> histopathological imagery scans</a> </p> <a href="https://publications.waset.org/abstracts/146403/tumor-detection-using-convolutional-neural-networks-cnn-based-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146403.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2020</span> Digraph Generated by Idempotents in Certain Finite Semigroup of Mappings</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hassan%20Ibrahim">Hassan Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Moses%20Anayo%20Mbah"> Moses Anayo Mbah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Idempotent generators of the finite full transformation semigroup and the digraph of the full transformation semigroup have been an interesting area of research in semigroup theory. 
In this work, we characterize some idempotent elements of the full transformation semigroup T_n by counting the strongly connected and disconnected digraphs, as well as the weakly and unilaterally connected digraphs. The orders of these digraphs in T_n are further obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digraphs" title="digraphs">digraphs</a>, <a href="https://publications.waset.org/abstracts/search?q=indempotent" title=" idempotent"> idempotent</a>, <a href="https://publications.waset.org/abstracts/search?q=semigroup" title=" semigroup"> semigroup</a>, <a href="https://publications.waset.org/abstracts/search?q=transformation" title=" transformation"> transformation</a> </p> <a href="https://publications.waset.org/abstracts/187023/digraph-generated-by-idempotents-in-certain-finite-semigroup-of-mappings" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187023.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">38</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2019</span> A General Framework to Successfully Operate the Digital Transformation Process in the Post-COVID Era</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Driss%20Kettani">Driss Kettani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we shed light on “Digital Divide 2.0,” which we see as COVID-19’s version of the digital divide. We believe that fighting Digital Divide 2.0 necessitates that a country be seriously advanced in the global digital transformation, which is, naturally, a complex, delicate, costly, and long-term process. 
We build an argument supporting our assumption and, from there, we present the foundations of a computational framework to guide and streamline Digital Transformation at all levels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20divide%202.0" title="digital divide 2.0">digital divide 2.0</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title=" digital transformation"> digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=ICTs%20for%20development" title=" ICTs for development"> ICTs for development</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20outcomes%20assessment" title=" computational outcomes assessment"> computational outcomes assessment</a> </p> <a href="https://publications.waset.org/abstracts/143734/a-general-framework-to-successfully-operate-the-digital-transformation-process-in-the-post-covid-era" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/143734.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2018</span> Effect of Enterprise Digital Transformation on Enterprise Growth: Theoretical Logic and Chinese Experience</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Li">Bin Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the era of the digital economy, digital transformation has gradually become a strategic choice for enterprise development, but there is a relative lack of systematic research from the perspective of enterprise growth. 
Based on a sample of Chinese A-share listed companies from 2011 to 2021, this paper constructs a digital transformation index system and an enterprise growth composite index to empirically test the impact of enterprise digital transformation on enterprise growth and its mechanism. The results show that digital transformation can significantly promote corporate growth. The mechanism analysis finds that reducing operating costs, optimizing human capital structure, promoting R&D output, and improving digital innovation capability play an important mediating role in the process of digital transformation promoting corporate growth. At the same time, the level of external digital infrastructure and the strength of organizational resilience play a positive moderating role in this process. In addition, by analyzing the heterogeneity of enterprises, this paper deepens the analysis of the driving factors and digital technology support of digital transformation, as well as the three dimensions of enterprise growth, thereby extending research on enterprise digital transformation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title="digital transformation">digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=enterprise%20growth" title=" enterprise growth"> enterprise growth</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20technology" title=" digital technology"> digital technology</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20infrastructure" title=" digital infrastructure"> digital infrastructure</a>, <a href="https://publications.waset.org/abstracts/search?q=organization%20resilience" title=" organization resilience"> organization resilience</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20innovation" title=" digital innovation"> digital innovation</a> </p> <a href="https://publications.waset.org/abstracts/181633/effect-of-enterprise-digital-transformation-on-enterprise-growth-theoretical-logic-and-chinese-experience" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181633.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">61</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2017</span> ELD79-LGD2006 Transformation Techniques Implementation and Accuracy Comparison in Tripoli Area, Libya</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jamal%20A.%20Gledan">Jamal A. Gledan</a>, <a href="https://publications.waset.org/abstracts/search?q=Othman%20A.%20Azzeidani"> Othman A. 
Azzeidani </a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the last decade, Libya established a new geodetic datum, the Libyan Geodetic Datum 2006 (LGD2006), by using GPS, whereas the previous Libyan datum, the Europe Libyan Datum 79 (ELD79), was established by the ground traversing method. This paper introduces ELD79-to-LGD2006 coordinate transformation techniques and an accuracy comparison between multiple regression equations and the three-parameter (Bursa-Wolf) model. The results obtained show that the overall accuracy of the stepwise multiple regression equations is better than that determined using the Bursa-Wolf transformation model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=geodetic%20datum" title="geodetic datum">geodetic datum</a>, <a href="https://publications.waset.org/abstracts/search?q=horizontal%20control%20points" title=" horizontal control points"> horizontal control points</a>, <a href="https://publications.waset.org/abstracts/search?q=traditional%20similarity%20transformation%20model" title=" traditional similarity transformation model"> traditional similarity transformation model</a>, <a href="https://publications.waset.org/abstracts/search?q=unconventional%20transformation%20techniques" title=" unconventional transformation techniques"> unconventional transformation techniques</a> </p> <a href="https://publications.waset.org/abstracts/6281/eld79-lgd2006-transformation-techniques-implementation-and-accuracy-comparison-in-tripoli-area-libya" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6281.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">307</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">2016</span> The Role of Business Process Management in Driving Digital Transformation: Insurance Company Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dalia%20Su%C5%A1a%20Vugec">Dalia Suša Vugec</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana-Marija%20Stjepi%C4%87"> Ana-Marija Stjepić</a>, <a href="https://publications.waset.org/abstracts/search?q=Darija%20Ivandi%C4%87%20Vidovi%C4%87"> Darija Ivandić Vidović</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Digital transformation is one of the latest trends in the global market. In order to maintain competitive advantage and sustainability, an increasing number of organizations are conducting digital transformation processes. These organizations are changing their business processes and creating new business models with the help of digital technologies. In that sense, one should also observe the role of business process management (BPM) and its maturity in driving digital transformation. Therefore, the goal of this paper is to investigate the role of BPM in the digital transformation process within one organization. Since practical experience shows that organizations from the financial sector can be observed as leaders in digital transformation, an insurance company was selected to participate in the study, owing to its high level of BPM maturity and the fact that it had previously been through a digital transformation process. In order to fulfill the goals of the paper, several interviews, as well as questionnaires, were conducted within the selected company. The results are presented in the form of a case study. 
Results indicate that the digital transformation process within the observed company has been successful, with a special focus on the development of digital strategy, BPM, and change management. The role of BPM in the digital transformation of the observed company is further discussed in the paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=business%20process%20management" title="business process management">business process management</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a>, <a href="https://publications.waset.org/abstracts/search?q=Croatia" title=" Croatia"> Croatia</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20transformation" title=" digital transformation"> digital transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=insurance%20company" title=" insurance company"> insurance company</a> </p> <a href="https://publications.waset.org/abstracts/96874/the-role-of-business-process-management-in-driving-digital-transformation-insurance-company-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96874.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2015</span> Slice Bispectrogram Analysis-Based Classification of Environmental Sounds Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Katsumi%20Hirata">Katsumi Hirata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Certain systems can function well only if they recognize the sound
environment as humans do. In this research, we focus on sound classification by adopting a convolutional neural network and aim to develop a method that automatically classifies various environmental sounds. Although the neural network is a powerful technique, its performance depends on the type of input data. Therefore, we propose an approach via a slice bispectrogram, which is a third-order spectrogram: a sliced version of the amplitude of the short-time bispectrum. This paper explains the slice bispectrogram and discusses the effectiveness of the derived method by evaluating experimental results on the ESC‑50 sound dataset. As a result, the proposed scheme gives high accuracy and stability. Furthermore, a relationship between accuracy and the non-Gaussianity of the sound signals was confirmed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=environmental%20sound" title="environmental sound">environmental sound</a>, <a href="https://publications.waset.org/abstracts/search?q=bispectrum" title=" bispectrum"> bispectrum</a>, <a href="https://publications.waset.org/abstracts/search?q=spectrogram" title=" spectrogram"> spectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=slice%20bispectrogram" title=" slice bispectrogram"> slice bispectrogram</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/114107/slice-bispectrogram-analysis-based-classification-of-environmental-sounds-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">126</span> </span> </div> </div> 
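For readers unfamiliar with the representation: the bispectrum of a frame with spectrum X is B(f1, f2) = X(f1)·X(f2)·X*(f1+f2), and a slice bispectrogram stacks the magnitude of one slice (here the diagonal f1 = f2) over successive frames. A hedged NumPy sketch of that construction (window, hop, and slice choice are illustrative, not taken from the paper):

```python
import numpy as np

def slice_bispectrogram(signal, frame_len=256, hop=128):
    """Stack the diagonal slice |B(f, f)| = |X(f) X(f) X*(2f)| of the
    short-time bispectrum over frames (rows: frames, cols: frequency bins)."""
    window = np.hanning(frame_len)
    n_bins = frame_len // 4                  # keep the 2f term inside the spectrum
    f = np.arange(n_bins)
    rows = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        x = np.fft.fft(signal[start:start + frame_len] * window)
        rows.append(np.abs(x[f] * x[f] * np.conj(x[2 * f])))
    return np.array(rows)

# Example: a signal containing a component and its double shows energy
# on the diagonal slice at the base frequency bin.
t = np.arange(4096) / 4096.0
sig = np.cos(2 * np.pi * 200 * t) + 0.5 * np.cos(2 * np.pi * 400 * t)
sb = slice_bispectrogram(sig)                # shape: (n_frames, frame_len // 4)
```

The resulting 2-D array can be fed to a convolutional network exactly like an ordinary spectrogram; the third-order statistic is what carries the non-Gaussianity information the abstract mentions.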
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2014</span> Malignancy Assessment of Brain Tumors Using Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chung-Ming%20Lo">Chung-Ming Lo</a>, <a href="https://publications.waset.org/abstracts/search?q=Kevin%20Li-Chun%20Hsieh"> Kevin Li-Chun Hsieh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The World Health Organization grades central nervous system gliomas as grade 2, 3, or 4 according to their aggressiveness. For brain tumors, image examination carries a lower risk than biopsy; besides, it is a challenge to extract relevant tissue during a biopsy operation. Observing the whole tumor structure and composition can provide a more objective assessment. This study therefore proposed a computer-aided diagnosis (CAD) system based on a convolutional neural network to quantitatively evaluate a tumor's malignancy from brain magnetic resonance imaging. A total of 30 grade 2, 43 grade 3, and 57 grade 4 gliomas were collected in the experiment. Parameters transferred from AlexNet were fine-tuned to classify the target brain tumors, achieving an accuracy of 98% and an area under the receiver operating characteristic curve (Az) of 0.99. Without pre-trained features, only 61% accuracy was obtained. The proposed convolutional neural network can accurately and efficiently classify grade 2, 3, and 4 gliomas. The promising accuracy can provide diagnostic suggestions to radiologists in the clinic. 
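The transfer-learning recipe the abstract describes, reusing pre-trained features and training only a new classification head, can be sketched framework-free. In the sketch below, a frozen random projection stands in for AlexNet's feature extractor (the real workflow would load pre-trained weights, e.g. from torchvision), and the toy data stands in for MRI-derived inputs; only the new 3-class softmax head is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for a pre-trained feature extractor (AlexNet's
# convolutional stack in the real workflow). Its weights are never updated.
W_frozen = rng.normal(size=(64, 256))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)     # fixed ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# New trainable head: 256 features -> 3 tumor grades (2, 3, 4)
W_head = np.zeros((256, 3))

# Toy inputs and grade labels (hypothetical, not the study's data)
X = rng.normal(size=(300, 64))
y = rng.integers(0, 3, size=300)
Y = np.eye(3)[y]                             # one-hot targets

F = features(X)                              # computed once: extractor is frozen
lr = 0.1
for _ in range(200):                         # fine-tune the head only
    P = softmax(F @ W_head)
    W_head -= lr * F.T @ (P - Y) / len(X)    # cross-entropy gradient step

train_acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

Freezing the extractor is what makes the small dataset (130 gliomas in the study) workable; as the abstract's 98% vs. 61% comparison suggests, the pre-trained features carry most of the discriminative power.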
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-aided%20diagnosis" title=" computer-aided diagnosis"> computer-aided diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=glioblastoma" title=" glioblastoma"> glioblastoma</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetic%20resonance%20imaging" title=" magnetic resonance imaging"> magnetic resonance imaging</a> </p> <a href="https://publications.waset.org/abstracts/108847/malignancy-assessment-of-brain-tumors-using-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108847.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=6">6</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=68">68</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=69">69</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=convolutional%20transformation&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 
World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>