quietscientist/gma_score_prediction_from_video: First Steps
type="text" tabindex="0" class="prompt" value="" > <i aria-hidden="true" class="search icon"></i> </div> </div> </div> </div> <div class="item"> <a href="/communities">Communities</a> </div> <div class="item"> <a href="/me/uploads">My dashboard</a> </div> <div class="right menu item"> <form> <a href="/login/?next=/records/14042732" class="ui button auth-button" aria-busy="false" aria-live="polite" aria-label="Log in" > <i class="sign-in icon auth-icon" aria-hidden="true"></i> Log in </a> <a href="/signup/" class="ui button signup"> <i class="edit outline icon"></i> Sign up </a> </form> </div> </nav> </nav> </div> </div> <div class="ui info message top attached m-0 inv-banner" id="banner-37"> <div class="ui container"> <p>Sign in with OpenAIRE is temporarily disabled due to a technical issue. We apologise for the inconvenience.</p> </div> </div> </header> </div> <main id="main"> <div class="invenio-page-body"> <section id="banners" class="banners" aria-label="Information banner"> <!-- COMMUNITY HEADER: hide it when displaying the submission request --> <!-- /COMMUNITY HEADER --> <!-- PREVIEW HEADER --> <!-- /PREVIEW HEADER --> <div class="ui warning flashed bottom attached manage message"> <div class="ui container"> <div class="ui relaxed grid"> <div class="column"> <div class="row"> <p> There is a <a href="https://zenodo.org/records/14042732/latest"><b>newer version</b></a> of the record available. </p> </div> </div> </div> </div> </div> </section> <div class="ui container"> <div class="ui relaxed grid mt-5"> <div class="two column row top-padded"> <article class="sixteen wide tablet eleven wide computer column main-record-content"> <section id="record-info" aria-label="Publication date and version number"> <div class="ui grid middle aligned"> <div class="two column row"> <div class="left floated left aligned column"> <span class="ui" title="Publication date"> Published November 5, 2024 </span> <span class="label text-muted"> | Version v0.1.0-alpha</span> </div> <div class="right floated right aligned column"> <span role="note" class="ui label horizontal small neutral mb-5" aria-label="Resource type" > Software </span> <span role="note" class="ui label horizontal small access-status open mb-5" data-tooltip="The record and files are publicly accessible." 
data-inverted="" aria-label="Access status" > <i class="icon unlock" aria-hidden="true"></i> <span aria-label="The record and files are publicly accessible."> Open </span> </span> </div> </div> </div> </section> <div class="ui divider hidden"></div><section id="record-title-section" aria-label="Record title and creators"> <h1 id="record-title" class="wrap-overflowing-text">quietscientist/gma_score_prediction_from_video: First Steps</h1> <section id="creatibutors" aria-label="Creators and contributors"> <div class="ui grid"> <div class="row ui accordion affiliations"> <div class="sixteen wide mobile twelve wide tablet thirteen wide computer column"> <h3 class="sr-only">Creators</h3> <ul class="creatibutors"> <li class="creatibutor-wrap separated"> <a class="ui creatibutor-link" href="/search?q=metadata.creators.person_or_org.name:%22quietscientist%22" > <span class="creatibutor-name">quietscientist</span></a> <i class="user icon"></i> </li> </ul> </div> </div> </div> </section> </section> <section id="description" class="rel-mt-2 rich-input-content" aria-label="Record description"> <h2 id="description-heading" class="sr-only">Description</h2> <div style="word-wrap: break-word;"> <p><strong>Pre-release: Early Prototype for Video-based GMA Score Prediction</strong> This initial pre-release of the GMA Score Prediction from Video project introduces foundational functionalities for automating General Movement Assessment (GMA) score predictions using video input. Designed with clinicians and researchers in mind, this prototype implements core processing pipelines, including pose estimation, feature extraction, and basic model prediction capabilities.</p> <p><strong>Key Features</strong></p> <ul> <li>Pose estimation: Google Colab implementation of MMPose pose estimation pipeline</li> <li>Feature Extraction: Feature extraction for capturing clinician-selected movement metrics.</li> <li>Prediction Model: Prototype model for generating preliminary GMA scores based on extracted movement features using auto-sklearn</li> </ul> <p>Please note: This pre-release is intended for testing and feedback purposes and will undergo significant revisions in future releases.</p> </div> </section> <section id="record-files" class="rel-mt-2 rel-mb-3" aria-label="Files" ><h2 id="files-heading">Files</h2> <div class="ui accordion panel mb-10 open" href="#files-preview-accordion-panel"> <h3 class="active title panel-heading open m-0"> <div role="button" id="files-preview-accordion-trigger" aria-controls="files-preview-accordion-panel" aria-expanded="true" tabindex="0" class="trigger" aria-label="File preview" > <span id="preview-file-title">quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip</span> <i class="angle right icon" aria-hidden="true"></i> </div> </h3> <div role="region" id="files-preview-accordion-panel" aria-labelledby="files-preview-accordion-trigger" class="active content preview-container pt-0 open" > <div> <iframe title="Preview" class="preview-iframe" id="preview-iframe" name="preview-iframe" src="/records/14042732/preview/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip?include_deleted=0" > </iframe> </div> </div> </div> <div class="ui accordion panel mb-10 open" href="#files-list-accordion-panel"> <h3 class="active title panel-heading open m-0"> <div role="button" id="files-list-accordion-trigger" aria-controls="files-list-accordion-panel" aria-expanded="true" tabindex="0" class="trigger"> Files <small class="text-muted"> (24.5 kB)</small> <i class="angle right icon" 
aria-hidden="true"></i> </div> </h3> <div role="region" id="files-list-accordion-panel" aria-labelledby="files-list-accordion-trigger" class="active content pt-0"> <div> <table class="ui striped table files fluid open"> <thead> <tr> <th>Name</th> <th>Size</th> <th class> <a role="button" class="ui compact mini button right floated archive-link" href="https://zenodo.org/api/records/14042732/files-archive"> <i class="file archive icon button" aria-hidden="true"></i> Download all </a> </th> </tr> </thead> <tbody> <tr> <td class="ten wide"> <div> <a href="/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip?download=1">quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip</a> </div> <small class="ui text-muted font-tiny">md5:16e0ca0052016cde5afb0e542fa7edfc <div class="ui icon inline-block" data-tooltip="This is the file fingerprint (checksum), which can be used to verify the file integrity."> <i class="question circle checksum icon"></i> </div> </small> </td> <td>24.5 kB</td> <td class="right aligned"> <span> <a role="button" class="ui compact mini button preview-link" href="/records/14042732/preview/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip?include_deleted=0" target="preview-iframe" data-file-key="quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip"> <i class="eye icon" aria-hidden="true"></i>Preview </a> <a role="button" class="ui compact mini button" href="/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip?download=1"> <i class="download icon" aria-hidden="true"></i>Download </a> </span> </td> </tr> </tbody> </table> </div> </div> </div> </section> <section id="additional-details" class="rel-mt-2" aria-label="Additional record details"> <h2 id="record-details-heading">Additional details</h2> <div class="ui divider"></div> <div class="ui grid"> <div class="sixteen wide mobile four wide tablet three wide computer column"> <h3 class="ui header">Related works</h3> </div> <div class="sixteen wide mobile twelve wide tablet thirteen wide computer column"> <dl class="details-list"> <dt class="ui tiny header">Is supplement to</dt> <dd> Software: <a href="https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha" target="_blank" title="Opens in new tab"> https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha </a> (URL) </dd> </dl> </div> </div> <div class="ui divider"></div> </section> <section id="citations-search" data-record-pids='{"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042732", "provider": "datacite"}, "oai": {"identifier": "oai:zenodo.org:14042732", "provider": "oai"}}' data-record-parent-pids='{"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042731", "provider": "datacite"}}' data-citations-endpoint="https://zenodo-broker.web.cern.ch/api/relationships" aria-label="Record citations" class="rel-mb-1" > </section> </article> <aside class="sixteen wide tablet five wide computer column sidebar" aria-label="Record details"> <section id="metrics" aria-label="Metrics" class="ui segment rdm-sidebar sidebar-container"> <div class="ui tiny two statistics rel-mt-1"> <div class="ui statistic"> <div class="value">52</div> <div class="label"> <i aria-hidden="true" class="eye icon"></i> Views </div> </div> <div class="ui statistic"> <div class="value">11</div> <div class="label"> <i aria-hidden="true" class="download icon"></i> Downloads </div> </div> </div> <div class="ui accordion rel-mt-1 centered"> 
<div class="title"> <i class="caret right icon" aria-hidden="true"></i> <span tabindex="0" class="trigger" data-open-text="Show more details" data-close-text="Show less details" > Show more details </span> </div> <div class="content"> <table id="record-statistics" class="ui definition table fluid"> <thead> <tr> <th></th> <th class="right aligned">All versions</th> <th class="right aligned">This version</th> </tr> </thead> <tbody> <tr> <td> Views <i tabindex="0" role="button" style="position:relative" class="popup-trigger question circle small icon" aria-expanded="false" aria-label="More info" data-variation="mini inverted" > </i> <p role="tooltip" class="popup-content ui flowing popup transition hidden"> Total views </p> </td> <td data-label="All versions" class="right aligned"> 52 </td> <td data-label="This version" class="right aligned"> 29 </td> </tr> <tr> <td> Downloads <i tabindex="0" role="button" style="position:relative" class="popup-trigger question circle small icon" aria-expanded="false" aria-label="More info" data-variation="mini inverted" > </i> <p role="tooltip" class="popup-content ui flowing popup transition hidden"> Total downloads </p> </td> <td data-label="All versions" class="right aligned"> 11 </td> <td data-label="This version" class="right aligned"> 5 </td> </tr> <tr> <td> Data volume <i tabindex="0" role="button" style="position:relative" class="popup-trigger question circle small icon" aria-expanded="false" aria-label="More info" data-variation="mini inverted" > </i> <p role="tooltip" class="popup-content ui flowing popup transition hidden"> Total data volume </p> </td> <td data-label="All versions" class="right aligned">447.7 kB</td> <td data-label="This version" class="right aligned">122.6 kB</td> </tr> </tbody> </table> <p class="text-align-center rel-mt-1"> <small> <a href="/help/statistics">More info on how stats are collected....</a> </small> </p> </div> </div> </section> <div class="sidebar-container"> <h2 class="ui medium top attached header mt-0">Versions</h2> <div id="record-versions" class="ui segment rdm-sidebar bottom attached pl-0 pr-0 pt-0"> <div class="versions"> <div id="recordVersions" data-record='{"access": {"embargo": {"active": false, "reason": null}, "files": "public", "record": "public", "status": "open"}, "created": "2024-11-05T20:42:59.872671+00:00", "custom_fields": {}, "deletion_status": {"is_deleted": false, "status": "P"}, "expanded": {"parent": {"access": {"owned_by": {"active": null, "blocked_at": null, "confirmed_at": null, "email": "", "id": "222512", "is_current_user": false, "links": {"avatar": "https://zenodo.org/api/users/222512/avatar.svg", "records_html": "https://zenodo.org/search/records?q=parent.access.owned_by.user:222512", "self": "https://zenodo.org/api/users/222512"}, "profile": {"affiliations": "", "full_name": ""}, "username": "msegado", "verified_at": null}}}}, "files": {"count": 1, "enabled": true, "entries": {"quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip": {"access": {"hidden": false}, "checksum": "md5:16e0ca0052016cde5afb0e542fa7edfc", "ext": "zip", "id": "414fbafc-cc69-463b-bb90-4f00018816e3", "key": "quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip", "links": {"content": "https://zenodo.org/api/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip/content", "self": "https://zenodo.org/api/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip"}, "metadata": null, "mimetype": "application/zip", "size": 24520, 
"storage_class": "L"}}, "order": [], "total_bytes": 24520}, "id": "14042732", "is_draft": false, "is_published": true, "links": {"access": "https://zenodo.org/api/records/14042732/access", "access_grants": "https://zenodo.org/api/records/14042732/access/grants", "access_links": "https://zenodo.org/api/records/14042732/access/links", "access_request": "https://zenodo.org/api/records/14042732/access/request", "access_users": "https://zenodo.org/api/records/14042732/access/users", "archive": "https://zenodo.org/api/records/14042732/files-archive", "archive_media": "https://zenodo.org/api/records/14042732/media-files-archive", "communities": "https://zenodo.org/api/records/14042732/communities", "communities-suggestions": "https://zenodo.org/api/records/14042732/communities-suggestions", "doi": "https://doi.org/10.5281/zenodo.14042732", "draft": "https://zenodo.org/api/records/14042732/draft", "files": "https://zenodo.org/api/records/14042732/files", "latest": "https://zenodo.org/api/records/14042732/versions/latest", "latest_html": "https://zenodo.org/records/14042732/latest", "media_files": "https://zenodo.org/api/records/14042732/media-files", "parent": "https://zenodo.org/api/records/14042731", "parent_doi": "https://doi.org/10.5281/zenodo.14042731", "parent_doi_html": "https://zenodo.org/doi/10.5281/zenodo.14042731", "parent_html": "https://zenodo.org/records/14042731", "requests": "https://zenodo.org/api/records/14042732/requests", "reserve_doi": "https://zenodo.org/api/records/14042732/draft/pids/doi", "self": "https://zenodo.org/api/records/14042732", "self_doi": "https://doi.org/10.5281/zenodo.14042732", "self_doi_html": "https://zenodo.org/doi/10.5281/zenodo.14042732", "self_html": "https://zenodo.org/records/14042732", "self_iiif_manifest": "https://zenodo.org/api/iiif/record:14042732/manifest", "self_iiif_sequence": "https://zenodo.org/api/iiif/record:14042732/sequence/default", "versions": "https://zenodo.org/api/records/14042732/versions"}, "media_files": {"count": 0, "enabled": false, "entries": {}, "order": [], "total_bytes": 0}, "metadata": {"creators": [{"person_or_org": {"family_name": "quietscientist", "name": "quietscientist", "type": "personal"}}], "description": "\u003cp\u003e\u003cstrong\u003ePre-release: Early Prototype for Video-based GMA Score Prediction\u003c/strong\u003e\nThis initial pre-release of the GMA Score Prediction from Video project introduces foundational functionalities for automating General Movement Assessment (GMA) score predictions using video input. 
Designed with clinicians and researchers in mind, this prototype implements core processing pipelines, including pose estimation, feature extraction, and basic model prediction capabilities.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eKey Features\u003c/strong\u003e\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ePose estimation: Google Colab implementation of MMPose pose estimation pipeline\u003c/li\u003e\n\u003cli\u003eFeature Extraction: Feature extraction for capturing clinician-selected movement metrics.\u003c/li\u003e\n\u003cli\u003ePrediction Model: Prototype model for generating preliminary GMA scores based on extracted movement features using auto-sklearn\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003ePlease note: This pre-release is intended for testing and feedback purposes and will undergo significant revisions in future releases.\u003c/p\u003e", "publication_date": "2024-11-05", "publisher": "Zenodo", "related_identifiers": [{"identifier": "https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha", "relation_type": {"id": "issupplementto", "title": {"de": "Erg\u00e4nzt", "en": "Is supplement to"}}, "resource_type": {"id": "software", "title": {"de": "Software", "en": "Software"}}, "scheme": "url"}], "resource_type": {"id": "software", "title": {"de": "Software", "en": "Software"}}, "rights": [{"description": {"en": "The Creative Commons Attribution license allows re-distribution and re-use of a licensed work on the condition that the creator is appropriately credited."}, "icon": "cc-by-icon", "id": "cc-by-4.0", "props": {"scheme": "spdx", "url": "https://creativecommons.org/licenses/by/4.0/legalcode"}, "title": {"en": "Creative Commons Attribution 4.0 International"}}], "title": "quietscientist/gma_score_prediction_from_video: First Steps", "version": "v0.1.0-alpha"}, "parent": {"access": {"owned_by": {"user": "222512"}, "settings": {"accept_conditions_text": null, "allow_guest_requests": false, "allow_user_requests": false, "secret_link_expiration": 0}}, "communities": {}, "id": "14042731", "pids": {"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042731", "provider": "datacite"}}}, "pids": {"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042732", "provider": "datacite"}, "oai": {"identifier": "oai:zenodo.org:14042732", "provider": "oai"}}, "revision_id": 4, "stats": {"all_versions": {"data_volume": 447716.0, "downloads": 11, "unique_downloads": 11, "unique_views": 52, "views": 63}, "this_version": {"data_volume": 122600.0, "downloads": 5, "unique_downloads": 5, "unique_views": 29, "views": 34}}, "status": "published", "swh": {"swhid": "swh:1:dir:2a2b23761aeec5d5ba0a0167db2c67fd49249c02;origin=https://doi.org/10.5281/zenodo.14042731;visit=swh:1:snp:46beb0f36edc7426ebaf5aa90b748b9d65a5c523;anchor=swh:1:rel:5eff4ff96e329bbef660573fc3fc0341c77056b6;path=/"}, "ui": {"access_status": {"description_l10n": "The record and files are publicly accessible.", "embargo_date_l10n": null, "icon": "unlock", "id": "open", "message_class": "", "title_l10n": "Open"}, "created_date_l10n_long": "November 5, 2024", "creators": {"affiliations": [], "creators": [{"person_or_org": {"family_name": "quietscientist", "name": "quietscientist", "type": "personal"}}]}, "custom_fields": {}, "description_stripped": "Pre-release: Early Prototype for Video-based GMA Score Prediction\nThis initial pre-release of the GMA Score Prediction from Video project introduces foundational functionalities for automating General Movement Assessment (GMA) score predictions 
External resources

Archived in: Software Heritage
swh:1:dir:2a2b23761aeec5d5ba0a0167db2c67fd49249c02
https://archive.softwareheritage.org/swh:1:dir:2a2b23761aeec5d5ba0a0167db2c67fd49249c02;origin=https://doi.org/10.5281/zenodo.14042731;visit=swh:1:snp:46beb0f36edc7426ebaf5aa90b748b9d65a5c523;anchor=swh:1:rel:5eff4ff96e329bbef660573fc3fc0341c77056b6;path=/

Available in: GitHub
quietscientist/gma_score_prediction_from_video (Release: v0.1.0-alpha)
https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha

Indexed in: OpenAIRE
https://explore.openaire.eu/search/software?pid=10.5281/zenodo.14042732
{"axios": {"headers": {"Accept": "application/vnd.inveniordm.v1+json"}, "url": "https://zenodo.org/api/records/14042732/communities-suggestions", "withCredentials": true}, "invenio": {"requestSerializer": "InvenioRecordsResourcesRequestSerializer"}}, "sortOptions": [{"sortBy": "bestmatch", "text": "Best match"}, {"sortBy": "newest", "text": "Newest"}, {"sortBy": "oldest", "text": "Oldest"}], "sortOrderDisabled": true}' data-permissions='{"can_edit": false, "can_manage": false, "can_media_read_files": true, "can_moderate": false, "can_new_version": false, "can_read_files": true, "can_review": false, "can_update_draft": false, "can_view": false}' class="sidebar-container" > <h2 class="ui medium top attached header">Communities</h2> <div class="ui segment bottom attached rdm-sidebar"> <div class="ui fluid placeholder"> <div class="image header"> <div class="line"></div> <div class="line"></div> </div> <div class="image header"> <div class="line"></div> <div class="line"></div> </div> <div class="image header"> <div class="line"></div> <div class="line"></div> </div> </div> </div> </div> <div class="sidebar-container"> <h2 class="ui medium top attached header mt-0">Details</h2> <div id="record-details" class="ui segment bottom attached rdm-sidebar"> <dl class="details-list"> <dt class="ui tiny header">DOI <dd> <span class="get-badge" data-toggle="tooltip" data-placement="bottom" style="cursor: pointer;" title="Get the DOI badge!"> <img id='record-doi-badge' data-target="[data-modal='10.5281/zenodo.14042732']" src="/badge/DOI/10.5281/zenodo.14042732.svg" alt="10.5281/zenodo.14042732" /> </span> <div id="doi-modal" class="ui modal fade badge-modal" data-modal="10.5281/zenodo.14042732"> <div class="header">DOI Badge</div> <div class="content"> <h4> <small>DOI</small> </h4> <h4> <pre>10.5281/zenodo.14042732</pre> </h4> <h3 class="ui small header"> Markdown </h3> <div class="ui message code"> <pre>[](https://doi.org/10.5281/zenodo.14042732)</pre> </div> <h3 class="ui small header"> reStructuredText </h3> <div class="ui message code"> <pre>.. 
reStructuredText:
.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.14042732.svg
   :target: https://doi.org/10.5281/zenodo.14042732

HTML:
<a href="https://doi.org/10.5281/zenodo.14042732"><img src="https://zenodo.org/badge/DOI/10.5281/zenodo.14042732.svg" alt="DOI"></a>

Image URL: https://zenodo.org/badge/DOI/10.5281/zenodo.14042732.svg
Target URL: https://doi.org/10.5281/zenodo.14042732

Resource type: Software
Publisher: Zenodo

Rights

Creative Commons Attribution 4.0 International (cc-by-4.0)
The Creative Commons Attribution license allows re-distribution and re-use of a licensed work on the condition that the creator is appropriately credited.
</span> <a class="license-link" href="https://creativecommons.org/licenses/by/4.0/legalcode" target="_blank" title="Opens in new tab">Read more</a> </div> </div> </li> </ul> </div> </div> <div class="sidebar-container"> <h2 class="ui medium top attached header mt-0">Citation</h2> <div id="citation" class="ui segment bottom attached rdm-sidebar"> <div id="recordCitation" data-record='{"access": {"embargo": {"active": false, "reason": null}, "files": "public", "record": "public", "status": "open"}, "created": "2024-11-05T20:42:59.872671+00:00", "custom_fields": {}, "deletion_status": {"is_deleted": false, "status": "P"}, "expanded": {"parent": {"access": {"owned_by": {"active": null, "blocked_at": null, "confirmed_at": null, "email": "", "id": "222512", "is_current_user": false, "links": {"avatar": "https://zenodo.org/api/users/222512/avatar.svg", "records_html": "https://zenodo.org/search/records?q=parent.access.owned_by.user:222512", "self": "https://zenodo.org/api/users/222512"}, "profile": {"affiliations": "", "full_name": ""}, "username": "msegado", "verified_at": null}}}}, "files": {"count": 1, "enabled": true, "entries": {"quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip": {"access": {"hidden": false}, "checksum": "md5:16e0ca0052016cde5afb0e542fa7edfc", "ext": "zip", "id": "414fbafc-cc69-463b-bb90-4f00018816e3", "key": "quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip", "links": {"content": "https://zenodo.org/api/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip/content", "self": "https://zenodo.org/api/records/14042732/files/quietscientist/gma_score_prediction_from_video-v0.1.0-alpha.zip"}, "metadata": null, "mimetype": "application/zip", "size": 24520, "storage_class": "L"}}, "order": [], "total_bytes": 24520}, "id": "14042732", "is_draft": false, "is_published": true, "links": {"access": "https://zenodo.org/api/records/14042732/access", "access_grants": "https://zenodo.org/api/records/14042732/access/grants", "access_links": "https://zenodo.org/api/records/14042732/access/links", "access_request": "https://zenodo.org/api/records/14042732/access/request", "access_users": "https://zenodo.org/api/records/14042732/access/users", "archive": "https://zenodo.org/api/records/14042732/files-archive", "archive_media": "https://zenodo.org/api/records/14042732/media-files-archive", "communities": "https://zenodo.org/api/records/14042732/communities", "communities-suggestions": "https://zenodo.org/api/records/14042732/communities-suggestions", "doi": "https://doi.org/10.5281/zenodo.14042732", "draft": "https://zenodo.org/api/records/14042732/draft", "files": "https://zenodo.org/api/records/14042732/files", "latest": "https://zenodo.org/api/records/14042732/versions/latest", "latest_html": "https://zenodo.org/records/14042732/latest", "media_files": "https://zenodo.org/api/records/14042732/media-files", "parent": "https://zenodo.org/api/records/14042731", "parent_doi": "https://doi.org/10.5281/zenodo.14042731", "parent_doi_html": "https://zenodo.org/doi/10.5281/zenodo.14042731", "parent_html": "https://zenodo.org/records/14042731", "requests": "https://zenodo.org/api/records/14042732/requests", "reserve_doi": "https://zenodo.org/api/records/14042732/draft/pids/doi", "self": "https://zenodo.org/api/records/14042732", "self_doi": "https://doi.org/10.5281/zenodo.14042732", "self_doi_html": "https://zenodo.org/doi/10.5281/zenodo.14042732", "self_html": "https://zenodo.org/records/14042732", "self_iiif_manifest": 
"https://zenodo.org/api/iiif/record:14042732/manifest", "self_iiif_sequence": "https://zenodo.org/api/iiif/record:14042732/sequence/default", "versions": "https://zenodo.org/api/records/14042732/versions"}, "media_files": {"count": 0, "enabled": false, "entries": {}, "order": [], "total_bytes": 0}, "metadata": {"creators": [{"person_or_org": {"family_name": "quietscientist", "name": "quietscientist", "type": "personal"}}], "description": "\u003cp\u003e\u003cstrong\u003ePre-release: Early Prototype for Video-based GMA Score Prediction\u003c/strong\u003e\nThis initial pre-release of the GMA Score Prediction from Video project introduces foundational functionalities for automating General Movement Assessment (GMA) score predictions using video input. Designed with clinicians and researchers in mind, this prototype implements core processing pipelines, including pose estimation, feature extraction, and basic model prediction capabilities.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eKey Features\u003c/strong\u003e\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ePose estimation: Google Colab implementation of MMPose pose estimation pipeline\u003c/li\u003e\n\u003cli\u003eFeature Extraction: Feature extraction for capturing clinician-selected movement metrics.\u003c/li\u003e\n\u003cli\u003ePrediction Model: Prototype model for generating preliminary GMA scores based on extracted movement features using auto-sklearn\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003ePlease note: This pre-release is intended for testing and feedback purposes and will undergo significant revisions in future releases.\u003c/p\u003e", "publication_date": "2024-11-05", "publisher": "Zenodo", "related_identifiers": [{"identifier": "https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha", "relation_type": {"id": "issupplementto", "title": {"de": "Erg\u00e4nzt", "en": "Is supplement to"}}, "resource_type": {"id": "software", "title": {"de": "Software", "en": "Software"}}, "scheme": "url"}], "resource_type": {"id": "software", "title": {"de": "Software", "en": "Software"}}, "rights": [{"description": {"en": "The Creative Commons Attribution license allows re-distribution and re-use of a licensed work on the condition that the creator is appropriately credited."}, "icon": "cc-by-icon", "id": "cc-by-4.0", "props": {"scheme": "spdx", "url": "https://creativecommons.org/licenses/by/4.0/legalcode"}, "title": {"en": "Creative Commons Attribution 4.0 International"}}], "title": "quietscientist/gma_score_prediction_from_video: First Steps", "version": "v0.1.0-alpha"}, "parent": {"access": {"owned_by": {"user": "222512"}, "settings": {"accept_conditions_text": null, "allow_guest_requests": false, "allow_user_requests": false, "secret_link_expiration": 0}}, "communities": {}, "id": "14042731", "pids": {"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042731", "provider": "datacite"}}}, "pids": {"doi": {"client": "datacite", "identifier": "10.5281/zenodo.14042732", "provider": "datacite"}, "oai": {"identifier": "oai:zenodo.org:14042732", "provider": "oai"}}, "revision_id": 4, "stats": {"all_versions": {"data_volume": 447716.0, "downloads": 11, "unique_downloads": 11, "unique_views": 52, "views": 63}, "this_version": {"data_volume": 122600.0, "downloads": 5, "unique_downloads": 5, "unique_views": 29, "views": 34}}, "status": "published", "swh": {"swhid": 
"swh:1:dir:2a2b23761aeec5d5ba0a0167db2c67fd49249c02;origin=https://doi.org/10.5281/zenodo.14042731;visit=swh:1:snp:46beb0f36edc7426ebaf5aa90b748b9d65a5c523;anchor=swh:1:rel:5eff4ff96e329bbef660573fc3fc0341c77056b6;path=/"}, "ui": {"access_status": {"description_l10n": "The record and files are publicly accessible.", "embargo_date_l10n": null, "icon": "unlock", "id": "open", "message_class": "", "title_l10n": "Open"}, "created_date_l10n_long": "November 5, 2024", "creators": {"affiliations": [], "creators": [{"person_or_org": {"family_name": "quietscientist", "name": "quietscientist", "type": "personal"}}]}, "custom_fields": {}, "description_stripped": "Pre-release: Early Prototype for Video-based GMA Score Prediction\nThis initial pre-release of the GMA Score Prediction from Video project introduces foundational functionalities for automating General Movement Assessment (GMA) score predictions using video input. Designed with clinicians and researchers in mind, this prototype implements core processing pipelines, including pose estimation, feature extraction, and basic model prediction capabilities.\n\nKey Features\n\n\n\nPose estimation: Google Colab implementation of MMPose pose estimation pipeline\n\nFeature Extraction: Feature extraction for capturing clinician-selected movement metrics.\n\nPrediction Model: Prototype model for generating preliminary GMA scores based on extracted movement features using auto-sklearn\n\n\nPlease note: This pre-release is intended for testing and feedback purposes and will undergo significant revisions in future releases.", "is_draft": false, "publication_date_l10n_long": "November 5, 2024", "publication_date_l10n_medium": "Nov 5, 2024", "related_identifiers": [{"identifier": "https://github.com/quietscientist/gma_score_prediction_from_video/tree/v0.1.0-alpha", "relation_type": {"id": "issupplementto", "title_l10n": "Is supplement to"}, "resource_type": {"id": "software", "title_l10n": "Software"}, "scheme": "url"}], "resource_type": {"id": "software", "title_l10n": "Software"}, "rights": [{"description_l10n": "The Creative Commons Attribution license allows re-distribution and re-use of a licensed work on the condition that the creator is appropriately credited.", "icon": "cc-by-icon", "id": "cc-by-4.0", "props": {"scheme": "spdx", "url": "https://creativecommons.org/licenses/by/4.0/legalcode"}, "title_l10n": "Creative Commons Attribution 4.0 International"}], "updated_date_l10n_long": "November 5, 2024", "version": "v0.1.0-alpha"}, "updated": "2024-11-05T20:43:00.074715+00:00", "versions": {"index": 1, "is_latest": false}}' data-styles='[["apa", "APA"], ["harvard-cite-them-right", "Harvard"], ["modern-language-association", "MLA"], ["vancouver", "Vancouver"], ["chicago-fullnote-bibliography", "Chicago"], ["ieee", "IEEE"]]' data-defaultstyle='"apa"' data-include-deleted='false'> </div> </div> </div> <div class="sidebar-container"> <h2 class="ui medium top attached header mt-0">Export</h2> <div id="export-record" class="ui segment bottom attached exports rdm-sidebar"> <div id="recordExportDownload" data-formats='[{"export_url": "/records/14042732/export/json", "name": "JSON"}, {"export_url": "/records/14042732/export/json-ld", "name": "JSON-LD"}, {"export_url": "/records/14042732/export/csl", "name": "CSL"}, {"export_url": "/records/14042732/export/datacite-json", "name": "DataCite JSON"}, {"export_url": "/records/14042732/export/datacite-xml", "name": "DataCite XML"}, {"export_url": "/records/14042732/export/dublincore", "name": "Dublin Core XML"}, 
{"export_url": "/records/14042732/export/marcxml", "name": "MARCXML"}, {"export_url": "/records/14042732/export/bibtex", "name": "BibTeX"}, {"export_url": "/records/14042732/export/geojson", "name": "GeoJSON"}, {"export_url": "/records/14042732/export/dcat-ap", "name": "DCAT"}, {"export_url": "/records/14042732/export/codemeta", "name": "Codemeta"}, {"export_url": "/records/14042732/export/cff", "name": "Citation File Format"}]'></div> </div> </div> <section id="upload-info" role="note" aria-label="Upload information" class="sidebar-container ui segment rdm-sidebar text-muted" > <h2 class="ui small header text-muted p-0 mb-5"><small>Technical metadata</small></h2> <dl class="m-0"> <dt class="inline"><small>Created</small></dt> <dd class="inline"> <small>November 5, 2024</small> </dd> <div> <dt class="rel-mt-1 inline"><small>Modified</small></dt> <dd class="inline"> <small>November 5, 2024</small> </dd> </div> </dl> </section> </aside> </div> </div> <div class="ui container"> <div class="ui relaxed grid"> <div class="two column row"> <div class="sixteen wide tablet eleven wide computer column"> <div class="ui grid"> <div class="centered row rel-mt-1"> <button id="jump-btn" class="jump-to-top ui button labeled icon" aria-label="Jump to top of page"> <i class="arrow alternate circle up outline icon"></i> Jump up </button> </div> </div></div> </div> </div> </div> </div> </div> </main> <footer id="rdm-footer-element"> <div class="footer-top"> <div class="ui container app-rdm-footer"> <div class="ui equal width stackable grid zenodo-footer"> <div class="column"> <h2 class="ui inverted tiny header">About</h2> <ul class="ui inverted link list"> <li class="item"> <a href="https://about.zenodo.org">About</a> </li> <li class="item"> <a href="https://about.zenodo.org/policies">Policies</a> </li> <li class="item"> <a href="https://about.zenodo.org/infrastructure">Infrastructure</a> </li> <li class="item"> <a href="https://about.zenodo.org/principles">Principles</a> </li> <li class="item"> <a href="https://about.zenodo.org/projects/">Projects</a> </li> <li class="item"> <a href="https://about.zenodo.org/roadmap/">Roadmap</a> </li> <li class="item"> <a href="https://about.zenodo.org/contact">Contact</a> </li> </ul> </div> <div class="column"> <h2 class="ui inverted tiny header">Blog</h2> <ul class="ui inverted link list"> <li class="item"> <a href="https://blog.zenodo.org">Blog</a> </li> </ul> </div> <div class="column"> <h2 class="ui inverted tiny header">Help</h2> <ul class="ui inverted link list"> <li class="item"> <a href="https://help.zenodo.org">FAQ</a> </li> <li class="item"> <a href="https://help.zenodo.org/docs/">Docs</a> </li> <li class="item"> <a href="https://help.zenodo.org/guides/">Guides</a> </li> <li class="item"> <a href="https://zenodo.org/support">Support</a> </li> </ul> </div> <div class="column"> <h2 class="ui inverted tiny header">Developers</h2> <ul class="ui inverted link list"> <li class="item"> <a href="https://developers.zenodo.org">REST API</a> </li> <li class="item"> <a href="https://developers.zenodo.org#oai-pmh">OAI-PMH</a> </li> </ul> </div> <div class="column"> <h2 class="ui inverted tiny header">Contribute</h2> <ul class="ui inverted link list"> <li class="item"> <a href="https://github.com/zenodo/zenodo-rdm"> <i class="icon external" aria-hidden="true"></i> GitHub </a> </li> <li class="item"> <a href="/donate"> <i class="icon external" aria-hidden="true"></i> Donate </a> </li> </ul> </div> <div class="six wide column right aligned"> <h2 class="ui inverted tiny 
header">Funded by</h2> <ul class="ui horizontal link list"> <li class="item"> <a href="https://home.cern" aria-label="CERN"> <img src="/static/images/cern.png" width="60" height="60" alt="" /> </a> </li> <li class="item"> <a href="https://www.openaire.eu" aria-label="OpenAIRE"> <img src="/static/images/openaire.png" width="60" height="60" alt="" /> </a> </li> <li class="item"> <a href="https://commission.europa.eu/index_en" aria-label="European Commission"> <img src="/static/images/eu.png" width="88" height="60" alt="" /> </a> </li> </ul> </div> </div> </div> </div> <div class="footer-bottom"> <div class="ui inverted container"> <div class="ui grid"> <div class="eight wide column left middle aligned"> <p class="m-0"> Powered by <a href="http://information-technology.web.cern.ch/about/computer-centre">CERN Data Centre</a> & <a href="https://inveniordm.docs.cern.ch/">InvenioRDM</a> </p> </div> <div class="eight wide column right aligned"> <ul class="ui inverted horizontal link list"> <li class="item"> <a href="https://stats.uptimerobot.com/vlYOVuWgM/">Status</a> </li> <li class="item"> <a href="https://about.zenodo.org/privacy-policy">Privacy policy</a> </li> <li class="item"> <a href="https://about.zenodo.org/cookie-policy">Cookie policy</a> </li> <li class="item"> <a href="https://about.zenodo.org/terms">Terms of Use</a> </li> <li class="item"> <a href="/support">Support</a> </li> </ul> </div> </div> </div> </div> </footer> <script type="text/javascript"> window.MathJax = { tex: { inlineMath: [['$', '$'], ['\\(', '\\)']], processEscapes: true // Allows escaping $ signs if needed } }; </script> <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/mathjax/3.2.2/es5/tex-mml-chtml.js?config=TeX-AMS-MML_HTMLorMML"></script> <script src="/static/dist/js/manifest.72cfa35c718a31310304.js"></script> <script src="/static/dist/js/73.b6b9397a87d45e8c7e79.js"></script> <script src="/static/dist/js/3526.bd3264d45f1ccb44f1af.js"></script> <script src="/static/dist/js/theme.11978575020ef7cbe61f.js"></script> <script src="/static/dist/js/3378.e724bc63e26a0c239377.js"></script> <script src="/static/dist/js/1057.0b81c8b7baf9420bd6d0.js"></script> <script src="/static/dist/js/7655.080424d0b9a5a7a5a9b1.js"></script> <script src="/static/dist/js/6506.9170dec9c316b2a1f3ff.js"></script> <script src="/static/dist/js/8871.f9931e2caeec55e62caa.js"></script> <script src="/static/dist/js/621.9e1000a5f039459a7477.js"></script> <script src="/static/dist/js/9827.6cd4f907ce62a46317ae.js"></script> <script src="/static/dist/js/742.a643c720ee5af43fd5bd.js"></script> <script src="/static/dist/js/base-theme-rdm.75cb08732d7d870443b0.js"></script> <script src="/static/dist/js/i18n_app.87c128dd93a480df8ee1.js"></script> <script src="/static/dist/js/4709.edb227953b4ff98d5ce6.js"></script> <script src="/static/dist/js/5941.45000a5fd4c17fad73d2.js"></script> <script src="/static/dist/js/9736.0fcd1f1a5978e6cd4ba7.js"></script> <script src="/static/dist/js/5965.81d9be87d398c617d6ec.js"></script> <script src="/static/dist/js/1677.2971dd047372d8d7fd06.js"></script> <script src="/static/dist/js/8102.95da350bf4dfcd36d527.js"></script> <script src="/static/dist/js/5368.1ead38f6cd40e43dda08.js"></script> <script src="/static/dist/js/8585.0c5e4994f5cb936d3ee3.js"></script> <script src="/static/dist/js/1990.b5183092e2854c3879f6.js"></script> <script src="/static/dist/js/7579.68d70da4d5ff4b2ff8c0.js"></script> <script src="/static/dist/js/overridable-registry.e5d7a3043a7c6ba8fde8.js"></script> <script 