CINXE.COM
Search results for: graph convolutional networks (GCNs)
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: graph convolutional networks (GCNs)</title> <meta name="description" content="Search results for: graph convolutional networks (GCNs)"> <meta name="keywords" content="graph convolutional networks (GCNs)"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="graph convolutional networks (GCNs)" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="graph convolutional networks (GCNs)"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3335</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: graph convolutional networks (GCNs)</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3335</span> Aspect-Level Sentiment Analysis with Multi-Channel and Graph Convolutional Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiajun%20Wang">Jiajun Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoge%20Li"> Xiaoge Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the aspect-level sentiment analysis task is to identify the sentiment polarity of aspects in a sentence. Currently, most methods mainly focus on using neural networks and attention mechanisms to model the relationship between aspects and context, but they ignore the dependence of words in different ranges in the sentence, resulting in deviation when assigning relationship weight to other words other than aspect words. To solve these problems, we propose a new aspect-level sentiment analysis model that combines a multi-channel convolutional network and graph convolutional network (GCN). Firstly, the context and the degree of association between words are characterized by Long Short-Term Memory (LSTM) and self-attention mechanism. Besides, a multi-channel convolutional network is used to extract the features of words in different ranges. Finally, a convolutional graph network is used to associate the node information of the dependency tree structure. We conduct experiments on four benchmark datasets. The experimental results are compared with those of other models, which shows that our model is better and more effective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aspect-level%20sentiment%20analysis" title="aspect-level sentiment analysis">aspect-level sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-channel%20convolution%20network" title=" multi-channel convolution network"> multi-channel convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolution%20network" title=" graph convolution network"> graph convolution network</a>, <a href="https://publications.waset.org/abstracts/search?q=dependency%20tree" title=" dependency tree"> dependency tree</a> </p> <a href="https://publications.waset.org/abstracts/146513/aspect-level-sentiment-analysis-with-multi-channel-and-graph-convolutional-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146513.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">217</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3334</span> Enhancing Knowledge Graph Convolutional Networks with Structural Adaptive Receptive Fields for Improved Node Representation and Information Aggregation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Zhihao">Zheng Zhihao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, Knowledge Graph Framework Network (KGCN) has developed powerful capabilities in knowledge representation and reasoning tasks. However, traditional KGCN often uses a fixed weight mechanism when aggregating information, failing to make full use of rich structural information, resulting in a certain expression ability of node representation, and easily causing over-smoothing problems. In order to solve these challenges, the paper proposes an new graph neural network model called KGCN-STAR (Knowledge Graph Convolutional Network with Structural Adaptive Receptive Fields). This model dynamically adjusts the perception of each node by introducing a structural adaptive receptive field. wild range, and a subgraph aggregator is designed to capture local structural information more effectively. Experimental results show that KGCN-STAR shows significant performance improvement on multiple knowledge graph data sets, especially showing considerable capabilities in the task of representation learning of complex structures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=knowledge%20graph" title="knowledge graph">knowledge graph</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20networks" title=" graph neural networks"> graph neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20adaptive%20receptive%20fields" title=" structural adaptive receptive fields"> structural adaptive receptive fields</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20aggregation" title=" information aggregation"> information aggregation</a> </p> <a href="https://publications.waset.org/abstracts/191048/enhancing-knowledge-graph-convolutional-networks-with-structural-adaptive-receptive-fields-for-improved-node-representation-and-information-aggregation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191048.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">33</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3333</span> Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefan%20K.%20Behfar">Stefan K. Behfar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ethereum" title="Ethereum">Ethereum</a>, <a href="https://publications.waset.org/abstracts/search?q=scalable%20network" title=" scalable network"> scalable network</a>, <a href="https://publications.waset.org/abstracts/search?q=GCN" title=" GCN"> GCN</a>, <a href="https://publications.waset.org/abstracts/search?q=probabilistic%20sampling" title=" probabilistic sampling"> probabilistic sampling</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20computing" title=" distributed computing"> distributed computing</a> </p> <a href="https://publications.waset.org/abstracts/170845/enhancing-scalability-in-ethereum-network-analysis-methods-and-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170845.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3332</span> Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ali%20Kazemi">Ali Kazemi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Those within the volatile and interconnected international economic markets, appropriately predicting market trends, hold substantial fees for traders and financial establishments. Traditional device mastering strategies have made full-size strides in forecasting marketplace movements; however, monetary data's complicated and networked nature calls for extra sophisticated processes. This observation offers a groundbreaking method for monetary marketplace prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our suggested algorithm is meticulously designed to forecast the traits of inventory market indices and cryptocurrency costs, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, affords a solid basis for schooling and checking out our predictive version. Our algorithm integrates diverse facts to construct a dynamic economic graph that correctly reflects market intricacies. We meticulously collect opening, closing, and high and low costs daily for key inventory marketplace indices (e.g., S&P 500, NASDAQ) and widespread cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of marketplace traits. Daily trading volumes are also incorporated to seize marketplace pastime and liquidity, providing critical insights into the market's shopping for and selling dynamics. Furthermore, recognizing the profound influence of the monetary surroundings on financial markets, we integrate critical macroeconomic signs with hobby fees, inflation rates, GDP increase, and unemployment costs into our model. Our GCN algorithm is adept at learning the relational patterns amongst specific financial devices represented as nodes in a comprehensive market graph. Edges in this graph encapsulate the relationships based totally on co-movement styles and sentiment correlations, enabling our version to grasp the complicated community of influences governing marketplace moves. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal illustration discovered through the GCN, enriched with historic fee and extent records. This lets the LSTM seize and expect temporal marketplace developments accurately. Inside the complete assessment of our GCN-LSTM algorithm across the inventory marketplace and cryptocurrency datasets, the version confirmed advanced predictive accuracy and profitability compared to conventional and opportunity machine learning to know benchmarks. Specifically, the model performed a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day charge movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing tremendous prediction mistakes, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, averaging an accuracy of 65%. This high degree of accuracy is instrumental for techniques that predict the course of price moves. This study showcases the efficacy of mixing graph-based totally and sequential deep learning knowledge in economic marketplace prediction and highlights the fee of a comprehensive, records-pushed evaluation framework. Our findings promise to revolutionize investment techniques and hazard management practices, offering investors and economic analysts a powerful device to navigate the complexities of cutting-edge economic markets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=financial%20market%20prediction" title="financial market prediction">financial market prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29" title=" graph convolutional networks (GCNs)"> graph convolutional networks (GCNs)</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory%20%28LSTM%29" title=" long short-term memory (LSTM)"> long short-term memory (LSTM)</a>, <a href="https://publications.waset.org/abstracts/search?q=cryptocurrency%20forecasting" title=" cryptocurrency forecasting"> cryptocurrency forecasting</a> </p> <a href="https://publications.waset.org/abstracts/184980/revolutionizing-financial-forecasts-enhancing-predictions-with-graph-convolutional-networks-gcn-long-short-term-memory-lstm-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184980.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">65</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3331</span> Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bharatendra%20Rai">Bharatendra Rai</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradient problems when developing deep learning models. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance is a challenging problem. Convolutional recurrent networks that combine the advantages of long short-term memory networks and convolutional neural networks can be useful for text classification performance improvements. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task where fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory%20networks" title="long short-term memory networks">long short-term memory networks</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20recurrent%20networks" title=" convolutional recurrent networks"> convolutional recurrent networks</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20classification" title=" text classification"> text classification</a>, <a href="https://publications.waset.org/abstracts/search?q=hyperparameter%20tuning" title=" hyperparameter tuning"> hyperparameter tuning</a>, <a href="https://publications.waset.org/abstracts/search?q=Tukey%20honest%20significant%20differences" title=" Tukey honest significant differences"> Tukey honest significant differences</a> </p> <a href="https://publications.waset.org/abstracts/169795/experimental-study-of-hyperparameter-tuning-a-deep-learning-convolutional-recurrent-network-for-text-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3330</span> Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J%C3%A9r%C3%A9mie%20Ochin">Jérémie Ochin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Soccer analytics are built on two data sources: the frame-by-frame position of each player on the terrain and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reach levels of performances in mean Average Precision (mAP) compatibles with the automation of such task. Nevertheless, to meet their expectation of exhaustiveness in the context of data analytics, such methods must be applied in a regime of high recall – low precision, using low confidence score thresholds. This setting unavoidably leads to the detection of false positives that are the product of the well documented overconfidence behaviour of neural networks and, in this case, their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players’ behaviour, pose, positions and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that the addition of information regarding surrounding player’s appearance, positions and velocity in the prediction methods can improve their metrics. Several methods are compared to build a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to the use of Graph Neural Networks trained in an end-to-end fashion with existing state-of-the-art 3D convolutional neural networks. It is shown that the inclusion of information regarding surrounding players helps reaching higher metrics. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fine-grained%20action%20recognition" title="fine-grained action recognition">fine-grained action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title=" human action recognition"> human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20networks" title=" graph neural networks"> graph neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20action%20recognition" title=" spatio-temporal action recognition"> spatio-temporal action recognition</a> </p> <a href="https://publications.waset.org/abstracts/192167/game-structure-and-spatio-temporal-action-detection-in-soccer-using-graphs-and-3d-convolutional-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">23</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3329</span> MhAGCN: Multi-Head Attention Graph Convolutional Network for Web Services Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bing%20Li">Bing Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhi%20Li"> Zhi Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yilong%20Yang"> Yilong Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web classification can promote the quality of service discovery and management in the service repository. It is widely used to locate developers desired services. Although traditional classification methods based on supervised learning models can achieve classification tasks, developers need to manually mark web services, and the quality of these tags may not be enough to establish an accurate classifier for service classification. With the doubling of the number of web services, the manual tagging method has become unrealistic. In recent years, the attention mechanism has made remarkable progress in the field of deep learning, and its huge potential has been fully demonstrated in various fields. This paper designs a multi-head attention graph convolutional network (MHAGCN) service classification method, which can assign different weights to the neighborhood nodes without complicated matrix operations or relying on understanding the entire graph structure. The framework combines the advantages of the attention mechanism and graph convolutional neural network. It can classify web services through automatic feature extraction. The comprehensive experimental results on a real dataset not only show the superior performance of the proposed model over the existing models but also demonstrate its potentially good interpretability for graph analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention%20mechanism" title="attention mechanism">attention mechanism</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20network" title=" graph convolutional network"> graph convolutional network</a>, <a href="https://publications.waset.org/abstracts/search?q=interpretability" title=" interpretability"> interpretability</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20classification" title=" service classification"> service classification</a>, <a href="https://publications.waset.org/abstracts/search?q=service%20discovery" title=" service discovery"> service discovery</a> </p> <a href="https://publications.waset.org/abstracts/131673/mhagcn-multi-head-attention-graph-convolutional-network-for-web-services-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/131673.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3328</span> Drug-Drug Interaction Prediction in Diabetes Mellitus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashini%20Maduka">Rashini Maduka</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20R.%20Wijesinghe"> C. R. Wijesinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Weerasinghe"> A. R. Weerasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drug-drug interactions (DDIs) can happen when two or more drugs are taken together. Today DDIs have become a serious health issue due to adverse drug effects. In vivo and in vitro methods for identifying DDIs are time-consuming and costly. Therefore, in-silico-based approaches are preferred in DDI identification. Most machine learning models for DDI prediction are used chemical and biological drug properties as features. However, some drug features are not available and costly to extract. Therefore, it is better to make automatic feature engineering. Furthermore, people who have diabetes already suffer from other diseases and take more than one medicine together. Then adverse drug effects may happen to diabetic patients and cause unpleasant reactions in the body. In this study, we present a model with a graph convolutional autoencoder and a graph decoder using a dataset from DrugBank version 5.1.3. The main objective of the model is to identify unknown interactions between antidiabetic drugs and the drugs taken by diabetic patients for other diseases. We considered automatic feature engineering and used Known DDIs only as the input for the model. Our model has achieved 0.86 in AUC and 0.86 in AP. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drug-drug%20interaction%20prediction" title="drug-drug interaction prediction">drug-drug interaction prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20embedding" title=" graph embedding"> graph embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks" title=" graph convolutional networks"> graph convolutional networks</a>, <a href="https://publications.waset.org/abstracts/search?q=adverse%20drug%20effects" title=" adverse drug effects"> adverse drug effects</a> </p> <a href="https://publications.waset.org/abstracts/165305/drug-drug-interaction-prediction-in-diabetes-mellitus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165305.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">100</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3327</span> Comparison of Classical Computer Vision vs. Convolutional Neural Networks Approaches for Weed Mapping in Aerial Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Cesar%20Pereira%20Junior">Paulo Cesar Pereira Junior</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexandre%20Monteiro"> Alexandre Monteiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafael%20da%20Luz%20Ribeiro"> Rafael da Luz Ribeiro</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Carlos%20Sobieranski"> Antonio Carlos Sobieranski</a>, <a href="https://publications.waset.org/abstracts/search?q=Aldo%20von%20Wangenheim"> Aldo von Wangenheim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a comparison between convolutional neural networks and classical computer vision approaches, for the specific precision agriculture problem of weed mapping on sugarcane fields aerial images. A systematic literature review was conducted to find which computer vision methods are being used on this specific problem. The most cited methods were implemented, as well as four models of convolutional neural networks. All implemented approaches were tested using the same dataset, and their results were quantitatively and qualitatively analyzed. The obtained results were compared to a human expert made ground truth for validation. The results indicate that the convolutional neural networks present better precision and generalize better than the classical models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20image%20processing" title=" digital image processing"> digital image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=precision%20agriculture" title=" precision agriculture"> precision agriculture</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicles" title=" unmanned aerial vehicles"> unmanned aerial vehicles</a> </p> <a href="https://publications.waset.org/abstracts/112982/comparison-of-classical-computer-vision-vs-convolutional-neural-networks-approaches-for-weed-mapping-in-aerial-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112982.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3326</span> Cricket Shot Recognition using Conditional Directed Spatial-Temporal Graph Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tanu%20Aneja">Tanu Aneja</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Malaviya"> Harsha Malaviya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Capturing pose information in cricket shots poses several challenges, such as low-resolution videos, noisy data, and joint occlusions caused by the nature of the shots. In response to these challenges, we propose a CondDGConv-based framework specifically for cricket shot prediction. By analyzing the spatial-temporal relationships in batsman shot sequences from an annotated 2D cricket dataset, our model achieves a 97% accuracy in predicting shot types. This performance is made possible by conditioning the graph network on batsman 2D poses, allowing for precise prediction of shot outcomes based on pose dynamics. Our approach highlights the potential for enhancing shot prediction in cricket analytics, offering a robust solution for overcoming pose-related challenges in sports analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action%20recognition" title="action recognition">action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=cricket.%20sports%20video%20analytics" title=" cricket. sports video analytics"> cricket. sports video analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks" title=" graph convolutional networks"> graph convolutional networks</a> </p> <a href="https://publications.waset.org/abstracts/192975/cricket-shot-recognition-using-conditional-directed-spatial-temporal-graph-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">18</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3325</span> Metric Dimension on Line Graph of Honeycomb Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Hussain">M. Hussain</a>, <a href="https://publications.waset.org/abstracts/search?q=Aqsa%20Farooq"> Aqsa Farooq</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Let G = (V,E) be a connected graph and distance between any two vertices a and b in G is a−b geodesic and is denoted by d(a, b). A set of vertices W resolves a graph G if each vertex is uniquely determined by its vector of distances to the vertices in W. A metric dimension of G is the minimum cardinality of a resolving set of G. In this paper line graph of honeycomb network has been derived and then we calculated the metric dimension on line graph of honeycomb network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Resolving%20set" title="Resolving set">Resolving set</a>, <a href="https://publications.waset.org/abstracts/search?q=Metric%20dimension" title=" Metric dimension"> Metric dimension</a>, <a href="https://publications.waset.org/abstracts/search?q=Honeycomb%20network" title=" Honeycomb network"> Honeycomb network</a>, <a href="https://publications.waset.org/abstracts/search?q=Line%20graph" title=" Line graph"> Line graph</a> </p> <a href="https://publications.waset.org/abstracts/101558/metric-dimension-on-line-graph-of-honeycomb-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3324</span> Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tharini%20N.%20de%20Silva">Tharini N. de Silva</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiao%20Zhibo"> Xiao Zhibo</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhao%20Rui"> Zhao Rui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mao%20Kezhi"> Mao Kezhi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning based approach training a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural networks (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=causal%20realtion%20extraction" title="causal realtion extraction">causal realtion extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=relation%20extracton" title=" relation extracton"> relation extracton</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20representation" title=" text representation"> text representation</a> </p> <a href="https://publications.waset.org/abstracts/61573/causal-relation-identification-using-convolutional-neural-networks-and-knowledge-based-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">732</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3323</span> Taxonomic Classification for Living Organisms Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saed%20Khawaldeh">Saed Khawaldeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Elsharnouby"> Mohamed Elsharnouby</a>, <a href="https://publications.waset.org/abstracts/search?q=Alaa%20%20Eddin%20Alchalabi"> Alaa Eddin Alchalabi</a>, <a href="https://publications.waset.org/abstracts/search?q=Usama%20Pervaiz"> Usama Pervaiz</a>, <a href="https://publications.waset.org/abstracts/search?q=Tajwar%20Aleef"> Tajwar Aleef</a>, <a href="https://publications.waset.org/abstracts/search?q=Vu%20Hoang%20Minh"> Vu Hoang Minh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Taxonomic classification has a wide-range of applications such as finding out more about the evolutionary history of organisms that can be done by making a comparison between species living now and species that lived in the past. This comparison can be made using different kinds of extracted species’ data which include DNA sequences. Compared to the estimated number of the organisms that nature harbours, humanity does not have a thorough comprehension of which specific species they all belong to, in spite of the significant development of science and scientific knowledge over many years. One of the methods that can be applied to extract information out of the study of organisms in this regard is to use the DNA sequence of a living organism as a marker, thus making it available to classify it into a taxonomy. The classification of living organisms can be done in many machine learning techniques including Neural Networks (NNs). In this study, DNA sequences classification is performed using Convolutional Neural Networks (CNNs) which is a special type of NNs. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20networks" title="deep networks">deep networks</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=taxonomic%20classification" title=" taxonomic classification"> taxonomic classification</a>, <a href="https://publications.waset.org/abstracts/search?q=DNA%20sequences%20classification" title=" DNA sequences classification "> DNA sequences classification </a> </p> <a href="https://publications.waset.org/abstracts/65170/taxonomic-classification-for-living-organisms-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/65170.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3322</span> Tumor Detection Using Convolutional Neural Networks (CNN) Based Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vinai%20K.%20Singh">Vinai K. Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Neural Network-based Learning techniques, there are several models of Convolutional Networks. Whenever the methods are deployed with large datasets, only then can their applicability and appropriateness be determined. Clinical and pathological pictures of lobular carcinoma are thought to exhibit a large number of random formations and textures. Working with such pictures is a difficult problem in machine learning. Focusing on wet laboratories and following the outcomes, numerous studies have been published with fresh commentaries in the investigation. In this research, we provide a framework that can operate effectively on raw photos of various resolutions while easing the issues caused by the existence of patterns and texturing. The suggested approach produces very good findings that may be used to make decisions in the diagnosis of cancer. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lobular%20carcinoma" title="lobular carcinoma">lobular carcinoma</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks%20%28CNN%29" title=" convolutional neural networks (CNN)"> convolutional neural networks (CNN)</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=histopathological%20imagery%20scans" title=" histopathological imagery scans"> histopathological imagery scans</a> </p> <a href="https://publications.waset.org/abstracts/146403/tumor-detection-using-convolutional-neural-networks-cnn-based-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146403.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3321</span> Image Classification with Localization Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhuyain%20Mobarok%20Hossain">Bhuyain Mobarok Hossain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we will be applying a convolutional neural network of multiple scales at multiple locations in the image in one sliding window. Most translation networks move away from the bounding box around the area of interest. In contrast to this architecture, we consider the problem to be a classification problem where each pixel of the image is a separate section. Image classification is the method of predicting an individual category or specifying by a shoal of data points. Image classification is a part of the classification problem, including any labels throughout the image. The image can be classified as a day or night shot. Or, likewise, images of cars and motorbikes will be automatically placed in their collection. The deep learning of image classification generally includes convolutional layers; the invention of it is referred to as a convolutional neural network (CNN). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title="image classification">image classification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a> </p> <a href="https://publications.waset.org/abstracts/139288/image-classification-with-localization-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139288.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3320</span> An Application of Graph Theory to The Electrical Circuit Using Matrix Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Samai%27la%20Abdullahi">Samai'la Abdullahi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A graph is a pair of two set and so that a graph is a pictorial representation of a system using two basic element nodes and edges. A node is represented by a circle (either hallo shade) and edge is represented by a line segment connecting two nodes together. In this paper, we present a circuit network in the concept of graph theory application and also circuit models of graph are represented in logical connection method were we formulate matrix method of adjacency and incidence of matrix and application of truth table. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=euler%20circuit%20and%20path" title="euler circuit and path">euler circuit and path</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20representation%20of%20circuit%20networks" title=" graph representation of circuit networks"> graph representation of circuit networks</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20of%20graph%20models" title=" representation of graph models"> representation of graph models</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20of%20circuit%20network%20using%20logical%20truth%20table" title=" representation of circuit network using logical truth table"> representation of circuit network using logical truth table</a> </p> <a href="https://publications.waset.org/abstracts/32358/an-application-of-graph-theory-to-the-electrical-circuit-using-matrix-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32358.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">561</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3319</span> Classification of Echo Signals Based on Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aisulu%20Tileukulova">Aisulu Tileukulova</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhexebay%20Dauren"> Zhexebay Dauren</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is lower against the background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as training a neural network. In this paper, the structure of a convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 layers of a fully connected perceptron. ReLU is used as an activation function in convolutional layers, while the last layer uses softmax. It is necessary to form a data set for training a neural network in order to detect a target. We built a Confusion Matrix of the CNN model to measure the effectiveness of our model. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using CNN shows high accuracy and significantly speeds up the process of predicting the target. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=radar" title="radar">radar</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=echo%20signals" title=" echo signals"> echo signals</a> </p> <a href="https://publications.waset.org/abstracts/147596/classification-of-echo-signals-based-on-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147596.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3318</span> Explainable Graph Attention Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=David%20Pham">David Pham</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongfeng%20Zhang"> Yongfeng Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs such as Graph Neural Networks (GNN) on various data mining and machine learning tasks. However, most of the deep learning models on graphs cannot easily explain their predictions and are thus often labelled as “black boxes.” For example, Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and explainability of problem spaces and show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=explainable%20AI" title="explainable AI">explainable AI</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20attention%20network" title=" graph attention network"> graph attention network</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20neural%20network" title=" graph neural network"> graph neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=node%20classification" title=" node classification"> node classification</a> </p> <a href="https://publications.waset.org/abstracts/156796/explainable-graph-attention-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3317</span> Deep Learning Based, End-to-End Metaphor Detection in Greek with Recurrent and Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Konstantinos%20Perifanos">Konstantinos Perifanos</a>, <a href="https://publications.waset.org/abstracts/search?q=Eirini%20Florou"> Eirini Florou</a>, <a href="https://publications.waset.org/abstracts/search?q=Dionysis%20Goutsos"> Dionysis Goutsos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents and benchmarks a number of end-to-end Deep Learning based models for metaphor detection in Greek. We combine Convolutional Neural Networks and Recurrent Neural Networks with representation learning to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving the previous state-of-the-art results, which had already achieved accuracy 0.82. Furthermore, no special preprocessing, feature engineering or linguistic knowledge is used in this work. The methods presented achieve accuracy of 0.92 and F-score 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the basis of training tuples, the related sentences and their labels. The outcome is a state-of-the-art collection of metaphor detection models, trained on limited labelled resources, which can be extended to other languages and similar tasks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=metaphor%20detection" title="metaphor detection">metaphor detection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=representation%20learning" title=" representation learning"> representation learning</a>, <a href="https://publications.waset.org/abstracts/search?q=embeddings" title=" embeddings"> embeddings</a> </p> <a href="https://publications.waset.org/abstracts/115854/deep-learning-based-end-to-end-metaphor-detection-in-greek-with-recurrent-and-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3316</span> Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Beier%20Zhu">Beier Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Zhang"> Rui Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Qi%20Song"> Qi Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Text contained within fixed-layout documents can be of great semantic value and so requires a high localization accuracy, such as ID cards, invoices, cheques, and passports. Recently, algorithms based on deep convolutional networks achieve high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, which ignores the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, which takes the layout information into account. The bounding-box regression network inputs the features pooled from arbitrarily sized RoIs and refine the localizations. These two networks share their convolutional features and are trained jointly. A typical type of fixed-layout documents: ID cards, is selected to evaluate the effectiveness of the proposed system. These networks are trained on data cropped from nature scene images, and synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates high accuracy word bounding boxes and achieves state-of-the-art performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bounding%20box%20regression" title="bounding box regression">bounding box regression</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20networks" title=" convolutional networks"> convolutional networks</a>, <a href="https://publications.waset.org/abstracts/search?q=fixed-layout%20documents" title=" fixed-layout documents"> fixed-layout documents</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20localization" title=" text localization"> text localization</a> </p> <a href="https://publications.waset.org/abstracts/85636/text-localization-in-fixed-layout-documents-using-convolutional-networks-in-a-coarse-to-fine-manner" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85636.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3315</span> Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mo-Se%20Lee">Mo-Se Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheol-Hwi%20Ahn"> Cheol-Hwi Ahn</a>, <a href="https://publications.waset.org/abstracts/search?q=Kee-Young%20Kwahk"> Kee-Young Kwahk</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyunchul%20Ahn"> Hyunchul Ahn</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Over the past decade, deep learning has been in spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as effective solution for recognizing and classifying images, has been popularly applied to classification and prediction problems in various fields. In this study, we try to apply CNN to stock market prediction, one of the most challenging tasks in the machine learning research. In specific, we propose to apply CNN as the binary classifier that predicts stock market direction (up or down) by using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at the graph and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into 5 days, 10 days, 15 days, and 20 days. And then, it creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyper parameters of the trained model by using the validation dataset. To validate our model, we will apply it to the prediction of KOSPI200 for 1,986 days in eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators such as CCI, Momentum, ROC and daily closing price of KOSPI200 of Korean stock market. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=Korean%20stock%20market" title=" Korean stock market"> Korean stock market</a>, <a href="https://publications.waset.org/abstracts/search?q=stock%20market%20prediction" title=" stock market prediction"> stock market prediction</a> </p> <a href="https://publications.waset.org/abstracts/80318/stock-market-prediction-using-convolutional-neural-network-that-learns-from-a-graph" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/80318.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3314</span> Topological Indices of Some Graph Operations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=U.%20Mary">U. Mary </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Let be a graph with a finite, nonempty set of objects called vertices together with a set of unordered pairs of distinct vertices of called edges. The vertex set is denoted by and the edge set by. Given two graphs and the wiener index of, wiener index for the splitting graph of a graph, the first Zagreb index of and its splitting graph, the 3-steiner wiener index of, the 3-steiner wiener index of a special graph are explored in this paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=complementary%20prism%20graph" title="complementary prism graph">complementary prism graph</a>, <a href="https://publications.waset.org/abstracts/search?q=first%20Zagreb%20index" title=" first Zagreb index"> first Zagreb index</a>, <a href="https://publications.waset.org/abstracts/search?q=neighborhood%20corona%20graph" title=" neighborhood corona graph"> neighborhood corona graph</a>, <a href="https://publications.waset.org/abstracts/search?q=steiner%20distance" title=" steiner distance"> steiner distance</a>, <a href="https://publications.waset.org/abstracts/search?q=splitting%20graph" title=" splitting graph"> splitting graph</a>, <a href="https://publications.waset.org/abstracts/search?q=steiner%20wiener%20index" title=" steiner wiener index"> steiner wiener index</a>, <a href="https://publications.waset.org/abstracts/search?q=wiener%20index" title=" wiener index"> wiener index</a> </p> <a href="https://publications.waset.org/abstracts/16774/topological-indices-of-some-graph-operations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16774.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">570</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3313</span> A Further Study on the 4-Ordered Property of Some Chordal Ring Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shin-Shin%20Kao">Shin-Shin Kao</a>, <a href="https://publications.waset.org/abstracts/search?q=Hsiu-Chunj%20Pan"> Hsiu-Chunj Pan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Given a graph G. A cycle of G is a sequence of vertices of G such that the first and the last vertices are the same. A hamiltonian cycle of G is a cycle containing all vertices of G. The graph G is k-ordered (resp. k-ordered hamiltonian) if for any sequence of k distinct vertices of G, there exists a cycle (resp. hamiltonian cycle) in G containing these k vertices in the specified order. Obviously, any cycle in a graph is 1-ordered, 2-ordered and 3-ordered. Thus the study of any graph being k-ordered (resp. k-ordered hamiltonian) always starts with k = 4. Most studies about this topic work on graphs with no real applications. To our knowledge, the chordal ring families were the first one utilized as the underlying topology in interconnection networks and shown to be 4-ordered [1]. Furthermore, based on computer experimental results in [1], it was conjectured that some of them are 4-ordered hamiltonian. In this paper, we intend to give some possible directions in proving the conjecture. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamiltonian%20cycle" title="Hamiltonian cycle">Hamiltonian cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=4-ordered" title=" 4-ordered"> 4-ordered</a>, <a href="https://publications.waset.org/abstracts/search?q=Chordal%20rings" title=" Chordal rings"> Chordal rings</a>, <a href="https://publications.waset.org/abstracts/search?q=3-regular" title=" 3-regular"> 3-regular</a> </p> <a href="https://publications.waset.org/abstracts/13946/a-further-study-on-the-4-ordered-property-of-some-chordal-ring-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13946.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">434</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3312</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method accounts for a convolutional layer capable of automatically learning correlation between neighbouring pixels. In the current form, Convolutional Neural Network (CNN) will learn features based on an image's content instead of the structural features of the image. The layer is particularly designed to subdue an image's content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The paper was assessed on latest natural and computer generated images, and it was concluded that it performs better than the current state of the art methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3311</span> Survey Paper on Graph Coloring Problem and Its Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prateek%20Chharia">Prateek Chharia</a>, <a href="https://publications.waset.org/abstracts/search?q=Biswa%20Bhusan%20Ghosh"> Biswa Bhusan Ghosh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Graph coloring is one of the prominent concepts in graph coloring. It can be defined as a coloring of the various regions of the graph such that all the constraints are fulfilled. In this paper various graphs coloring approaches like greedy coloring, Heuristic search for maximum independent set and graph coloring using edge table is described. Graph coloring can be used in various real time applications like student time tabling generation, Sudoku as a graph coloring problem, GSM phone network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=graph%20coloring" title="graph coloring">graph coloring</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy%20coloring" title=" greedy coloring"> greedy coloring</a>, <a href="https://publications.waset.org/abstracts/search?q=heuristic%20search" title=" heuristic search"> heuristic search</a>, <a href="https://publications.waset.org/abstracts/search?q=edge%20table" title=" edge table"> edge table</a>, <a href="https://publications.waset.org/abstracts/search?q=sudoku%20as%20a%20graph%20coloring%20problem" title=" sudoku as a graph coloring problem"> sudoku as a graph coloring problem</a> </p> <a href="https://publications.waset.org/abstracts/19691/survey-paper-on-graph-coloring-problem-and-its-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19691.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">539</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3310</span> Traffic Sign Recognition System Using Convolutional Neural NetworkDevineni</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Devineni%20Vijay%20Bhaskar">Devineni Vijay Bhaskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Yendluri%20Raja"> Yendluri Raja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We recommend a model for traffic sign detection stranded on Convolutional Neural Networks (CNN). We first renovate the unique image into the gray scale image through with support vector machines, then use convolutional neural networks with fixed and learnable layers for revealing and understanding. The permanent layer can reduction the amount of attention areas to notice and crop the limits very close to the boundaries of traffic signs. The learnable coverings can rise the accuracy of detection significantly. Besides, we use bootstrap procedures to progress the accuracy and avoid overfitting problem. In the German Traffic Sign Detection Benchmark, we obtained modest results, with an area under the precision-recall curve (AUC) of 99.49% in the group “Risk”, and an AUC of 96.62% in the group “Obligatory”. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine" title=" support vector machine"> support vector machine</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20signs" title=" traffic signs"> traffic signs</a>, <a href="https://publications.waset.org/abstracts/search?q=bootstrap%20procedures" title=" bootstrap procedures"> bootstrap procedures</a>, <a href="https://publications.waset.org/abstracts/search?q=precision-recall%20curve" title=" precision-recall curve"> precision-recall curve</a> </p> <a href="https://publications.waset.org/abstracts/149896/traffic-sign-recognition-system-using-convolutional-neural-networkdevineni" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">122</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3309</span> A New Graph Theoretic Problem with Ample Practical Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mehmet%20Hakan%20Karaata">Mehmet Hakan Karaata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we first coin a new graph theocratic problem with numerous applications. Second, we provide two algorithms for the problem. The first solution is using a brute-force techniques, whereas the second solution is based on an initial identification of the cycles in the given graph. We then provide a correctness proof of the algorithm. The applications of the problem include graph analysis, graph drawing and network structuring. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithm" title="algorithm">algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=cycle" title=" cycle"> cycle</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20algorithm" title=" graph algorithm"> graph algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20theory" title=" graph theory"> graph theory</a>, <a href="https://publications.waset.org/abstracts/search?q=network%20structuring" title=" network structuring"> network structuring</a> </p> <a href="https://publications.waset.org/abstracts/67285/a-new-graph-theoretic-problem-with-ample-practical-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67285.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3308</span> Gender Effects in EEG-Based Functional Brain Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mahdi%20Jalili">Mahdi Jalili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. Network representation of EEG time series can be an efficient vehicle to understand the underlying mechanisms of brain function. Brain functional networks – whose nodes are brain regions and edges correspond to functional links between them – are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which graph theory metrics are sex dependent. To this end, EEGs from 24 healthy female subjects and 21 healthy male subjects were recorded in eyes-closed resting state conditions. The connectivity matrices were extracted using correlation analysis and were further binarized to obtain binary functional networks. Global and local efficiency measures – as graph theory metrics– were computed for the extracted networks. We found that male brains have a significantly greater global efficiency (i.e., global communicability of the network) across all frequency bands for a wide range of cost values in both hemispheres. Furthermore, for a range of cost values, female brains showed significantly greater right-hemispheric local efficiency (i.e., local connectivity) than male brains. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=EEG" title="EEG">EEG</a>, <a href="https://publications.waset.org/abstracts/search?q=brain" title=" brain"> brain</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20networks" title=" functional networks"> functional networks</a>, <a href="https://publications.waset.org/abstracts/search?q=network%20science" title=" network science"> network science</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20theory" title=" graph theory"> graph theory</a> </p> <a href="https://publications.waset.org/abstracts/23346/gender-effects-in-eeg-based-functional-brain-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">443</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3307</span> Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Elfituri">Mustafa Elfituri</a>, <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Cook"> Jonathan Cook</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, graph-based computations have become more important in large-scale scientific computing as they can provide a methodology to model many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have some properties such as irregularity and poor locality that make their performance different than regular applications performance. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms. Little is known exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect its performance in order to identify possible changes that would improve its performance. Results are discussed in relation to what factors contribute to performance degradation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=graph%20computation" title="graph computation">graph computation</a>, <a href="https://publications.waset.org/abstracts/search?q=graph500%20benchmark" title=" graph500 benchmark"> graph500 benchmark</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20architectures" title=" parallel architectures"> parallel architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=parallel%20programming" title=" parallel programming"> parallel programming</a>, <a href="https://publications.waset.org/abstracts/search?q=workload%20characterization." title=" workload characterization."> workload characterization.</a> </p> <a href="https://publications.waset.org/abstracts/133666/analyzing-the-factors-that-cause-parallel-performance-degradation-in-parallel-graph-based-computations-using-graph500" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/133666.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3306</span> Complete Tripartite Graphs with Spanning Maximal Planar Subgraphs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Severino%20Gervacio">Severino Gervacio</a>, <a href="https://publications.waset.org/abstracts/search?q=Velimor%20Almonte"> Velimor Almonte</a>, <a href="https://publications.waset.org/abstracts/search?q=Emmanuel%20Natalio"> Emmanuel Natalio</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A simple graph is planar if it there is a way of drawing it in the plane without edge crossings. A planar graph which is not a proper spanning subgraph of another planar graph is a maximal planar graph. We prove that for complete tripartite graphs of order at most 9, the only ones that contain a spanning maximal planar subgraph are K1,1,1, K2,2,2, K2,3,3, and K3,3,3. The main result gives a necessary and sufficient condition for the complete tripartite graph Kx,y,z to contain a spanning maximal planar subgraph. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=complete%20tripartite%20graph" title="complete tripartite graph">complete tripartite graph</a>, <a href="https://publications.waset.org/abstracts/search?q=graph" title=" graph"> graph</a>, <a href="https://publications.waset.org/abstracts/search?q=maximal%20planar%20graph" title=" maximal planar graph"> maximal planar graph</a>, <a href="https://publications.waset.org/abstracts/search?q=planar%20graph" title=" planar graph"> planar graph</a>, <a href="https://publications.waset.org/abstracts/search?q=subgraph" title=" subgraph"> subgraph</a> </p> <a href="https://publications.waset.org/abstracts/59157/complete-tripartite-graphs-with-spanning-maximal-planar-subgraphs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59157.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=111">111</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=112">112</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks%20%28GCNs%29&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>