Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities

Tangina Sultana

Sensors (MDPI AG), 2023
class="ds-work-cover--hover-container"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">download</span><p>Download Free PDF</p></div><div class="ds-work-cover--ribbon-container">Download Free PDF</div><div class="ds-work-cover--ribbon-triangle"></div></button></div></div></div><div class="ds-work-card--work-information"><h1 class="ds-work-card--work-title">Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities</h1><div class="ds-work-card--work-authors ds-work-card--detail"><a class="ds-work-card--author js-wsj-grid-card-author ds2-5-body-md ds2-5-body-link" data-author-id="128920438" href="https://khu.academia.edu/TanginaSultana"><img alt="Profile image of Tangina Sultana" class="ds-work-card--author-avatar" src="https://0.academia-photos.com/128920438/35786042/30901429/s65_tangina.sultana.jpg" />Tangina Sultana</a></div><div class="ds-work-card--detail"><p class="ds-work-card--detail ds2-5-body-sm">Sensors</p></div><p class="ds-work-card--work-abstract ds-work-card--detail ds2-5-body-md">Human action recognition systems use data collected from a wide range of sensors to accurately identify and interpret human actions. One of the most challenging issues for computer vision is the automatic and precise identification of human activities. A significant increase in feature learning-based representations for action recognition has emerged in recent years, due to the widespread use of deep learning-based features. This study presents an in-depth analysis of human activity recognition that investigates recent developments in computer vision. Augmented reality, human–computer interaction, cybersecurity, home monitoring, and surveillance cameras are all examples of computer vision applications that often go in conjunction with human action detection. 
We give a taxonomy-based, rigorous study of human activity recognition techniques, discussing the best ways to acquire human action features, derived using RGB and depth data, as well as the latest research on deep learning and ...</p><div class="ds-work-card--button-container"><button class="ds2-5-button js-swp-download-button" data-signup-modal="{"location":"continue-reading-button--work-card","attachmentId":104332123,"attachmentType":"pdf","workUrl":"https://www.academia.edu/104663980/Human_Action_Recognition_A_Taxonomy_Based_Survey_Updates_and_Opportunities"}">See full PDF</button><button class="ds2-5-button ds2-5-button--secondary js-swp-download-button" data-signup-modal="{"location":"download-pdf-button--work-card","attachmentId":104332123,"attachmentType":"pdf","workUrl":"https://www.academia.edu/104663980/Human_Action_Recognition_A_Taxonomy_Based_Survey_Updates_and_Opportunities"}"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">download</span>Download PDF</button></div></div></div></div><div data-auto_select="false" data-client_id="331998490334-rsn3chp12mbkiqhl6e7lu2q0mlbu0f1b" data-doc_id="104332123" data-landing_url="https://www.academia.edu/104663980/Human_Action_Recognition_A_Taxonomy_Based_Survey_Updates_and_Opportunities" data-login_uri="https://www.academia.edu/registrations/google_one_tap" data-moment_callback="onGoogleOneTapEvent" id="g_id_onload"></div><div class="ds-top-related-works--grid-container"><div class="ds-related-content--container ds-top-related-works--container"><h2 class="ds-related-content--heading">Related papers</h2><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="0" data-entity-id="88045110" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/88045110/Human_Action_Recognition_Using_Deep_Learning">Human Action Recognition Using Deep Learning</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="31493941" href="https://irjet.academia.edu/IRJET">IRJET Journal</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IRJET, 2022</p><p class="ds-related-work--abstract ds2-5-body-sm">The goals of video analysis tasks have changed significantly over time, shifting from inferring the current state to forecasting the future state. Recent advancements in the fields of computer vision and machine learning have made it possible. Different human activities are inferred in tasks based on vision-based action recognition based on the full motions of those acts. By extrapolating from that person's current actions, it also aids in the prognosis of that person's future action. Since it directly addresses issues in the real world, such as visual surveillance, autonomous cars, entertainment, etc., it has been a prominent topic in recent years. To create an effective human action recognizer, a lot of study has been done in this area. Additionally, it is anticipated that more work will need to be done. In this sense, human action recognition has a wide range of uses, including patient monitoring, video surveillance, and many more. Two CNN and LRCN models are put out in this article. The findings show that the recommended approach performs at least 8% more accurately than the traditional two-stream CNN method. 
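The entry above names CNN and LRCN models but gives no architectural detail. As a rough illustration only, a minimal LRCN-style classifier (a per-frame CNN feeding an LSTM) could look like the following PyTorch sketch; every layer size and name here is an assumption for illustration, not the paper's actual design.

```python
# Minimal LRCN-style sketch (illustrative only): a small per-frame CNN
# feeds an LSTM that aggregates temporal information across the clip.
import torch
import torch.nn as nn

class LRCN(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        # Per-frame spatial feature extractor (a toy CNN; real LRCNs
        # typically reuse an ImageNet-pretrained backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # -> (batch*time, 64, 1, 1)
        )
        self.lstm = nn.LSTM(64, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)          # temporal aggregation
        return self.head(out[:, -1])       # classify from the last step

logits = LRCN(num_classes=10)(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```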
An Efficient Human Activity Recognition Technique Based on Deep Learning
Fakhreddine Ababsa (Pattern Recognition and Image Analysis, 2019)

Human Activity Recognition Using Deep Learning: A Survey
Bijal Gadhia (Lecture Notes on Data Engineering and Communications Technologies)
Human Action Recognition with Deep Learning
Emre Tatbak (2020)

A Close Look into Human Activity Recognition Models using Deep Learning
Naeem Seliya (2022 3rd International Conference on Computing, Networks and Internet of Things, CNIOT)
Learning","attachmentId":93118599,"attachmentType":"pdf","work_url":"https://www.academia.edu/89291374/A_Close_Look_into_Human_Activity_Recognition_Models_using_Deep_Learning","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/89291374/A_Close_Look_into_Human_Activity_Recognition_Models_using_Deep_Learning"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="5" data-entity-id="72536482" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/72536482/A_review_on_Human_Action_Recognition_in_videos_using_Deep_Learning">A review on Human Action Recognition in videos using Deep Learning</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="101955904" href="https://stfrancishyd.academia.edu/VarshaDevaraj">Varsha Devaraj</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2021</p><p class="ds-related-work--abstract ds2-5-body-sm">Human Action Recognition (HAR) in video plays a vital role in today&#39;s world. The aim of HARis to automatically identify and analyse human activities using acquired information from video data. Some of the applications include security and surveillance, smart homes and assisted living, health monitoring, robotics, human– computer interaction, intelligent driving, video-retrieval, gaming and entertainment etc. This paper explores the impact of Deep Learning techniques on action recognition. We also explore how spatiotemporal features are aggregated through various deep architectures, the role of optical flow as an input, the impacts on real-time capabilities, and the compactness & interpretability of the learned features. Although several papers have already been published in the general HAR topics, the growing technologies in the field as well as the multi-disciplinary nature of HAR prompt the need for constant updates in the field. 
An Advanced Approach to Recognize Human Activities via Deep Learning
Aryan Karn (International Journal of Engineering Applied Sciences and Technology)
The study of wearable and handheld sensors for recognizing human activity has improved our understanding of human behaviours and objectives. Many researchers seek to identify a user's activities from raw data using the fewest resources necessary. In this article, we propose a deep belief network in an end-to-end architecture for activity recognition (DBN-LSTM). This DBN-LSTM method improves predictability from raw data and reduces both model complexity and the need for extensive hand-crafted engineering; CNN-LSTM likewise provides a spatially and temporally rich network. Our proposed model achieves 99% accuracy and 92% precision on the UCI HAR public data set.
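The DBN-LSTM itself is not specified on this page. As a hedged stand-in, the sketch below shows a generic CNN-LSTM over windowed inertial-sensor data of the UCI HAR shape (128-sample windows, 9 channels, 6 activity classes); all layer sizes are assumptions for illustration.

```python
# Sketch of a generic CNN-LSTM for windowed inertial-sensor data
# (UCI-HAR-shaped input). Illustrative stand-in only, not the
# paper's DBN-LSTM.
import torch
import torch.nn as nn

class SensorCNNLSTM(nn.Module):
    def __init__(self, channels: int = 9, num_classes: int = 6):
        super().__init__()
        self.conv = nn.Sequential(            # local motion patterns
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)      # -> (batch, time/2, 64)
        out, _ = self.lstm(h)                 # longer-range dynamics
        return self.head(out[:, -1])

print(SensorCNNLSTM()(torch.randn(4, 9, 128)).shape)  # torch.Size([4, 6])
```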
Real Time Human Activity Recognition Using Deep Learning
Journal of Computer Science IJCSIS (International Journal of Computer Science and Information Security, Vol. 22, No. 3, June 2024)
With the rise in anti-social incidents, security is now given greater importance, and many organizations have installed CCTV to monitor people and their interactions around the clock. In one developed country of 64 million people, each person is caught on camera about 30 times a day, and the resulting video must be stored for a set period: a 704x576 stream recorded at 25 fps generates about 20 GB per day. Constantly monitoring this data to judge whether events are abnormal is a nearly impossible task, because it requires continuous management and attention, which makes automation necessary. It is also important to identify which frames, and which regions within them, contain abnormal activity, so that it can be assessed quickly; video frames are processed and the individuals and their activity types are analyzed from each processed frame. Machine learning and deep learning algorithms and methods make this possible at scale.
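The storage figure quoted above can be sanity-checked with simple arithmetic: 20 GB per day corresponds to an average bitrate of roughly 1.9 Mbit/s, a plausible compressed CCTV stream, whereas uncompressed 704x576 RGB at 25 fps would be far larger.

```python
# Back-of-envelope check of the "about 20 GB per day" figure for a
# 704x576 @ 25 fps camera.
seconds_per_day = 24 * 60 * 60
daily_bytes = 20e9                                # claimed 20 GB/day
avg_bitrate = daily_bytes * 8 / seconds_per_day   # bits per second
print(f"{avg_bitrate / 1e6:.2f} Mbit/s")          # ~1.85 Mbit/s, typical CCTV

raw_bytes = 704 * 576 * 3 * 25 * seconds_per_day  # uncompressed 24-bit RGB
print(f"raw: {raw_bytes / 1e12:.1f} TB/day")      # ~2.6 TB/day uncompressed
```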
Human Activity Recognition
IRJET Journal (IRJET, 2022)
Due to the extensive use of various sensors, human activity detection has recently gained popularity in a variety of domains, such as person monitoring and human-robot interaction. The main goal of the proposed system is to create a model that identifies human actions in video using deep learning, trained on the Kinetics dataset. Convolutional Neural Networks (CNNs), a class of deep neural networks that operate directly on raw inputs, are one such technique, but standard CNN models handle only two-dimensional inputs. To detect actions in videos, this study therefore uses a three-dimensional CNN for video classification: since 3D convolutional networks naturally apply convolutions in 3D space, they are recommended for video categorization.
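To make the 3D-convolution point concrete, a minimal and purely illustrative 3D-CNN video classifier might look like the following; the depth and the 400-class head (a Kinetics-400-sized output) are assumptions, not the paper's model.

```python
# Minimal 3D-CNN sketch: Conv3d kernels slide over (time, height, width)
# jointly, which is why 3D convolutions suit video classification.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                   # halves time and space together
    nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),           # global spatiotemporal pooling
    nn.Flatten(),
    nn.Linear(64, 400),                # e.g., 400 Kinetics classes
)

clip = torch.randn(2, 3, 16, 112, 112)   # (batch, C, T, H, W)
print(model(clip).shape)                  # torch.Size([2, 400])
```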
Enhanced Recognition of Human Activity using Hybrid Deep Learning Techniques
Abinaya S, Potti Sai Pavan Guru Jayanth, FOREX Publication (FOREX Publication, 2024)
In the domain of deep learning, Human Activity Recognition (HAR) models stand out, surpassing conventional methods: they autonomously extract vital features and manage complex sensor data. However, the evolving nature of HAR demands costly and frequent retraining due to variations in subjects, sensors, and sampling rates. To address this challenge, we introduce Cross-Domain Activities Analysis (CDAA) combined with a clustering-based Gated Recurrent Unit (GRU) model. CDAA reimagines motion clusters, merging origin and destination movements while quantifying domain disparities. We also incorporate image datasets, leveraging Convolutional Neural Networks (CNNs). The proposed hybrid GRU_CNN model addresses specific challenges in human activity recognition, such as subject and sensor variation; it consistently achieves 98.5% accuracy across image, UCI-HAR, and PAMAP2 datasets and excels in distinguishing activities with similar postures. This research reshapes the landscape of HAR, opening the door to applications in healthcare, fitness tracking, and beyond.
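The GRU_CNN hybrid described above is not detailed on this page. The sketch below only gestures at the general idea of fusing a CNN branch and a GRU branch over the same sensor window; all shapes and the fusion scheme are assumptions, and the paper's clustering-based CDAA component is not reproduced.

```python
# Rough sketch of a hybrid GRU/CNN classifier: a 1D-CNN branch and a
# GRU branch read the same sensor window and their features are fused.
import torch
import torch.nn as nn

class GRUCNNHybrid(nn.Module):
    def __init__(self, channels: int = 9, num_classes: int = 12):
        super().__init__()
        self.cnn = nn.Sequential(              # local, translation-robust
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # -> (batch, 64, 1)
        )
        self.gru = nn.GRU(channels, 64, batch_first=True)
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        c = self.cnn(x).squeeze(-1)            # (batch, 64)
        g, _ = self.gru(x.transpose(1, 2))     # GRU reads (batch, time, ch)
        fused = torch.cat([c, g[:, -1]], dim=1)
        return self.head(fused)

print(GRUCNNHybrid()(torch.randn(4, 9, 128)).shape)  # torch.Size([4, 12])
```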
Uncovering Human Multimodal Activity Recognition with a Deep Learning Approach
Roseli Romero (2020 International Joint Conference on Neural Networks, IJCNN, 2020)

Deep Learning based Human Action Recognition
Ritik Verma (ITM Web of Conferences, 2021)
class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="2" data-entity-id="82131968" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/82131968/Sensor_Based_Human_Activity_Recognition_with_Spatio_Temporal_Deep_Learning">Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="20756645" href="https://independent.academia.edu/mansouralsulaiman">mansour alsulaiman</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Sensors, 2021</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning","attachmentId":87935284,"attachmentType":"pdf","work_url":"https://www.academia.edu/82131968/Sensor_Based_Human_Activity_Recognition_with_Spatio_Temporal_Deep_Learning","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/82131968/Sensor_Based_Human_Activity_Recognition_with_Spatio_Temporal_Deep_Learning"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="3" data-entity-id="74866271" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/74866271/Hybrid_deep_learning_framework_for_human_activity_recognition">Hybrid deep learning framework for human activity recognition</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="190266685" href="https://gtu-in.academia.edu/SHRISHAILSIDRAMAYYAMATH">SHRISHAIL SIDRAMAYYA MATH</a></div><p class="ds-related-work--metadata ds2-5-body-xs">International Journal of Nonlinear Analysis and Applications, 2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Hybrid deep learning framework for human activity recognition","attachmentId":82864472,"attachmentType":"pdf","work_url":"https://www.academia.edu/74866271/Hybrid_deep_learning_framework_for_human_activity_recognition","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/74866271/Hybrid_deep_learning_framework_for_human_activity_recognition"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" 
translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="4" data-entity-id="124414957" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/124414957/Comprehensive_Analysis_of_Deep_Learning_based_Human_Activity_Recognition_approaches_based_on_Accuracy">Comprehensive Analysis of Deep Learning-based Human Activity Recognition approaches based on Accuracy</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="58961425" href="https://independent.academia.edu/HardikModi12">Hardik Modi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">International Journal of Computing and Digital Systems, 2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Comprehensive Analysis of Deep Learning-based Human Activity Recognition approaches based on Accuracy","attachmentId":118644950,"attachmentType":"pdf","work_url":"https://www.academia.edu/124414957/Comprehensive_Analysis_of_Deep_Learning_based_Human_Activity_Recognition_approaches_based_on_Accuracy","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/124414957/Comprehensive_Analysis_of_Deep_Learning_based_Human_Activity_Recognition_approaches_based_on_Accuracy"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="5" data-entity-id="83593895" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/83593895/Human_Activity_Recognition_Using_Tools_of_Convolutional_Neural_Networks_A_State_of_the_Art_Review_Data_Sets_Challenges_and_Future_Prospects">Human Activity Recognition Using Tools of Convolutional Neural Networks: A State of the Art Review, Data Sets, Challenges and Future Prospects</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32981241" href="https://independent.academia.edu/FakhriKarray">Fakhri Karray</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Human Activity Recognition Using Tools of Convolutional Neural Networks: A State of the Art Review, Data Sets, Challenges and Future Prospects","attachmentId":88887210,"attachmentType":"pdf","work_url":"https://www.academia.edu/83593895/Human_Activity_Recognition_Using_Tools_of_Convolutional_Neural_Networks_A_State_of_the_Art_Review_Data_Sets_Challenges_and_Future_Prospects","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download 
free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/83593895/Human_Activity_Recognition_Using_Tools_of_Convolutional_Neural_Networks_A_State_of_the_Art_Review_Data_Sets_Challenges_and_Future_Prospects"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="6" data-entity-id="116130298" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/116130298/Human_activity_recognition_using_Deep_Learning">Human activity recognition using Deep Learning</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="293925150" href="https://independent.academia.edu/IshikaJindal23">Ishika Jindal</a></div><p class="ds-related-work--metadata ds2-5-body-xs">International Journal of Emerging Trends in Engineering Research, 2019</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Human activity recognition using Deep Learning","attachmentId":112345537,"attachmentType":"pdf","work_url":"https://www.academia.edu/116130298/Human_activity_recognition_using_Deep_Learning","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/116130298/Human_activity_recognition_using_Deep_Learning"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="7" data-entity-id="72028102" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/72028102/Deep_Learning_for_Recognizing_Human_Activities_using_Motions_of_Skeletal_Joints">Deep Learning for Recognizing Human Activities using Motions of Skeletal Joints</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="1291001" href="https://independent.academia.edu/PykeTin">Pyke Tin</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IEEE Transactions on Consumer Electronics</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Deep Learning for Recognizing Human Activities using Motions of Skeletal Joints","attachmentId":81125064,"attachmentType":"pdf","work_url":"https://www.academia.edu/72028102/Deep_Learning_for_Recognizing_Human_Activities_using_Motions_of_Skeletal_Joints","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link 
ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/72028102/Deep_Learning_for_Recognizing_Human_Activities_using_Motions_of_Skeletal_Joints"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="8" data-entity-id="96014667" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/96014667/Recent_evolution_of_modern_datasets_for_human_activity_recognition_a_deep_survey">Recent evolution of modern datasets for human activity recognition: a deep survey</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="255486760" href="https://independent.academia.edu/RoshanSingh353">Roshan Singh</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Multimedia Systems, 2019</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Recent evolution of modern datasets for human activity recognition: a deep survey","attachmentId":98032350,"attachmentType":"pdf","work_url":"https://www.academia.edu/96014667/Recent_evolution_of_modern_datasets_for_human_activity_recognition_a_deep_survey","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/96014667/Recent_evolution_of_modern_datasets_for_human_activity_recognition_a_deep_survey"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="9" data-entity-id="122620392" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/122620392/Multimodal_vision_based_human_action_recognition_using_deep_learning_a_review">Multimodal vision-based human action recognition using deep learning: a review</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="165040186" href="https://uk.academia.edu/ElhamShabaninia">Elham Shabaninia</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Artificial intelligence review, 2024</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Multimodal vision-based human action recognition using deep learning: a review","attachmentId":117251664,"attachmentType":"pdf","work_url":"https://www.academia.edu/122620392/Multimodal_vision_based_human_action_recognition_using_deep_learning_a_review","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link 
ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/122620392/Multimodal_vision_based_human_action_recognition_using_deep_learning_a_review"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="10" data-entity-id="92177354" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/92177354/Review_on_recent_Computer_Vision_Methods_for_Human_Action_Recognition">Review on recent Computer Vision Methods for Human Action Recognition</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="192526280" href="https://independent.academia.edu/muhamadazhee">azhee muhamad</a></div><p class="ds-related-work--metadata ds2-5-body-xs">ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, 2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Review on recent Computer Vision Methods for Human Action Recognition","attachmentId":95255754,"attachmentType":"pdf","work_url":"https://www.academia.edu/92177354/Review_on_recent_Computer_Vision_Methods_for_Human_Action_Recognition","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/92177354/Review_on_recent_Computer_Vision_Methods_for_Human_Action_Recognition"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="11" data-entity-id="44954185" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/44954185/Complex_Human_Action_Recognition_Using_a_Hierarchical_Feature_Reduction_and_Deep_Learning_Based_Method">Complex Human Action Recognition Using a Hierarchical Feature Reduction and Deep Learning Based Method</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="50818720" href="https://leeds.academia.edu/MahdiRezaei">Mahdi Rezaei</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Springer Nature, 2021</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Complex Human Action Recognition Using a Hierarchical Feature Reduction and Deep Learning Based Method","attachmentId":65485206,"attachmentType":"pdf","work_url":"https://www.academia.edu/44954185/Complex_Human_Action_Recognition_Using_a_Hierarchical_Feature_Reduction_and_Deep_Learning_Based_Method","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span 
class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/44954185/Complex_Human_Action_Recognition_Using_a_Hierarchical_Feature_Reduction_and_Deep_Learning_Based_Method"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="12" data-entity-id="108373861" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/108373861/A_Deep_Learning_Approach_for_Human_Action_Recognition_Using_Skeletal_Information">A Deep Learning Approach for Human Action Recognition Using Skeletal Information</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="38211" href="https://uniwa.academia.edu/PhivosMylonas">Phivos Mylonas</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Springer eBooks, 2020</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A Deep Learning Approach for Human Action Recognition Using Skeletal Information","attachmentId":106775741,"attachmentType":"pdf","work_url":"https://www.academia.edu/108373861/A_Deep_Learning_Approach_for_Human_Action_Recognition_Using_Skeletal_Information","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/108373861/A_Deep_Learning_Approach_for_Human_Action_Recognition_Using_Skeletal_Information"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="13" data-entity-id="101151940" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/101151940/A_Review_Human_Activity_Recognition">A Review : Human Activity Recognition</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="64525554" href="https://technoscienceacademy.academia.edu/IJSRCSEIT">International Journal of Scientific Research in Computer Science, Engineering and Information Technology IJSRCSEIT</a></div><p class="ds-related-work--metadata ds2-5-body-xs">International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A Review : Human Activity Recognition","attachmentId":101771228,"attachmentType":"pdf","work_url":"https://www.academia.edu/101151940/A_Review_Human_Activity_Recognition","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" 
translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/101151940/A_Review_Human_Activity_Recognition"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="14" data-entity-id="60901450" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/60901450/Multi_Layered_Deep_Learning_Features_Fusion_for_Human_Action_Recognition">Multi-Layered Deep Learning Features Fusion for Human Action Recognition</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="41666418" href="https://independent.academia.edu/attiquekhan1">attique khan</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Computers, Materials & Continua</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Multi-Layered Deep Learning Features Fusion for Human Action Recognition","attachmentId":74143474,"attachmentType":"pdf","work_url":"https://www.academia.edu/60901450/Multi_Layered_Deep_Learning_Features_Fusion_for_Human_Action_Recognition","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/60901450/Multi_Layered_Deep_Learning_Features_Fusion_for_Human_Action_Recognition"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="15" data-entity-id="86356759" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/86356759/Deep_Architectures_for_Human_Activity_Recognition_using_Sensors">Deep Architectures for Human Activity Recognition using Sensors</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="8042236" href="https://muet.academia.edu/FaisalKShaikh">Faisal K. 
Shaikh</a></div><p class="ds-related-work--metadata ds2-5-body-xs">3C Tecnología_Glosas de innovación aplicadas a la pyme, 2019</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Deep Architectures for Human Activity Recognition using Sensors","attachmentId":90824702,"attachmentType":"pdf","work_url":"https://www.academia.edu/86356759/Deep_Architectures_for_Human_Activity_Recognition_using_Sensors","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/86356759/Deep_Architectures_for_Human_Activity_Recognition_using_Sensors"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div><div class="ds-related-content--container"><h2 class="ds-related-content--heading">Related topics</h2><div class="ds-research-interests--pills-container"><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="422" href="https://www.academia.edu/Documents/in/Computer_Science">Computer Science</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="465" href="https://www.academia.edu/Documents/in/Artificial_Intelligence">Artificial Intelligence</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="472" href="https://www.academia.edu/Documents/in/Human_Computer_Interaction">Human Computer Interaction</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="524" href="https://www.academia.edu/Documents/in/Analytical_Chemistry">Analytical Chemistry</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="55405" href="https://www.academia.edu/Documents/in/Sensors">Sensors</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="90270" href="https://www.academia.edu/Documents/in/Action_Recognition">Action Recognition</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="901526" href="https://www.academia.edu/Documents/in/Action_Physics_">Action (Physics)</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="1237788" href="https://www.academia.edu/Documents/in/Electrical_And_Electronic_Engineering">Electrical And Electronic Engine...</a></div></div></div></div></div><div class="footer--content"><ul class="footer--main-links hide-on-mobile"><li><a href="https://www.academia.edu/about">About</a></li><li><a href="https://www.academia.edu/press">Press</a></li><li><a rel="nofollow" href="https://medium.com/academia">Blog</a></li><li><a href="https://www.academia.edu/documents">Papers</a></li><li><a href="https://www.academia.edu/topics">Topics</a></li><li><a href="https://www.academia.edu/hiring"><svg style="width: 13px; height: 13px; position: relative; bottom: -1px;" aria-hidden="true" focusable="false" data-prefix="fas" data-icon="briefcase" class="svg-inline--fa fa-briefcase fa-w-16" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M320 336c0 8.84-7.16 16-16 16h-96c-8.84 0-16-7.16-16-16v-48H0v144c0 25.6 22.4 48 48 