Beat Signer | Vrije Universiteit Brussel - Academia.edu

Professor of Computer Science, https://beatsigner.com

Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) in Belgium and director of the Web and Information Systems Engineering (WISE) lab. He studied Computer Science at ETH Zurich, where he also obtained his PhD in Computer Science.

CURRENT RESEARCH
With his research group, Beat investigates cross-media information spaces and architectures (CISA), cross-media technologies, interactive paper and augmented reality solutions, dynamic data physicalisation, technology-enhanced learning, as well as multimodal interaction frameworks and hybrid positioning solutions. In doing so, the group develops new document formats for representing information across different types of media, together with fluid user interfaces for interacting with the resulting cross-media information spaces. A significant part of this research is based on extensions and applications of the resource-selector-link (RSL) hypermedia metamodel, sketched below.

RESEARCH INTERESTS
 * cross-media technologies
 * human-information interaction
 * augmented reality and interactive paper
 * data physicalisation and tangible holograms
 * personal information management
 * technology-enhanced learning
 * internet of things and web of things
 * document engineering

TEACHING
 * Next Generation User Interfaces
 * Human-Computer Interaction
 * Information Visualisation
 * Web Technologies
 * Advanced Topics in Big Data
 * Advanced Topics in Information Systems (past)
 * Introduction to Databases (past)
 * Databases (past)

As part of the European Paper++ and PaperWorks projects, Beat Signer developed the interactive paper (iPaper) framework for integrating paper with digital services and information. Various solutions for innovative forms of interactive paper document publishing have been realised, and the iPaper framework has been applied in applications including PaperPoint, EdFest, Generosa Enterprise, the Lost Cosmonaut, Print-n-Link and PaperProof. Based on the experience gained from realising these interactive paper applications, he is currently investigating general design patterns and metaphors for cross-media user interfaces.
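The resource-selector-link (RSL) metamodel mentioned above is built around three core concepts: resources (entire documents or media), selectors (addressing parts of a resource) and links (connecting one or more source entities to one or more target entities). Since resources, selectors and links are all entities, links over links become possible. The following TypeScript fragment is a minimal illustrative sketch of just this core; all class and field names are chosen for illustration and do not come from any reference implementation.

// Minimal sketch of the core RSL concepts; names and fields are
// illustrative assumptions, not a reference implementation.

// Resources, selectors and links are all entities, so a link may point
// to a resource, to part of a resource or even to another link.
abstract class Entity {
  constructor(public readonly name: string) {}
}

// A resource represents an entire document or medium, e.g. a web page,
// a printed page or a video.
class Resource extends Entity {}

// A selector addresses part of a resource, e.g. a text range or a shape
// on a printed page; concrete selector types are media-specific.
class Selector extends Entity {
  constructor(name: string, public readonly resource: Resource) {
    super(name);
  }
}

// A link connects one or more source entities to one or more target
// entities.
class Link extends Entity {
  constructor(
    name: string,
    public readonly sources: Entity[],
    public readonly targets: Entity[]
  ) {
    super(name);
  }
}

// Example: linking a region on a printed page to a (hypothetical) web page.
const page = new Resource("printed conference programme, page 3");
const region = new Selector("talk entry at (x, y, width, height)", page);
const website = new Resource("https://example.org/talk-details");
const paperLink = new Link("paper-to-web", [region], [website]);
console.log(`${paperLink.name}: ${paperLink.sources[0].name} -> ${paperLink.targets[0].name}`);

The full RSL metamodel additionally covers aspects such as users, layers and structural relationships, which this sketch deliberately omits.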
class="u-borderColorGrayLight u-borderBottom1"><a rel="false" href="https://www.academia.edu/documents">Papers</a></li><li class="u-borderColorGrayLight u-borderBottom1"><a rel="nofollow" href="https://www.academia.edu/terms">Terms</a></li><li class="u-borderColorGrayLight u-borderBottom1"><a rel="nofollow" href="https://www.academia.edu/privacy">Privacy</a></li><li class="u-borderColorGrayLight u-borderBottom1"><a rel="nofollow" href="https://www.academia.edu/copyright">Copyright</a></li><li class="u-borderColorGrayLight u-borderBottom1"><a rel="nofollow" href="https://www.academia.edu/hiring"><i class="fa fa-briefcase"></i>&nbsp;We're Hiring!</a></li><li class="u-borderColorGrayLight u-borderBottom1"><a rel="nofollow" href="https://support.academia.edu/hc/en-us"><i class="fa fa-question-circle"></i>&nbsp;Help Center</a></li><li class="js-mobile-nav-collapse-trigger u-borderColorGrayLight u-borderBottom1 dropup" style="display:none"><a href="#">less&nbsp<span class="caret"></span></a></li></ul></li></ul></div></div></div><script>(function(){ var $moreLink = $(".js-mobile-nav-expand-trigger"); var $lessLink = $(".js-mobile-nav-collapse-trigger"); var $section = $('.js-mobile-nav-expand-section'); $moreLink.click(function(ev){ ev.preventDefault(); $moreLink.hide(); $lessLink.show(); $section.collapse('show'); }); $lessLink.click(function(ev){ ev.preventDefault(); $moreLink.show(); $lessLink.hide(); $section.collapse('hide'); }); })() if ($a.is_logged_in() || false) { new Aedu.NavigationController({ el: '.js-main-nav', showHighlightedNotification: false }); } else { $(".js-header-login-url").attr("href", $a.loginUrlWithRedirect()); } Aedu.autocompleteSearch = new AutocompleteSearch({el: '.js-SiteSearch-form'});</script></div></div> <div id='site' class='fixed'> <div id="content" class="clearfix"> <script>document.addEventListener('DOMContentLoaded', function(){ var $dismissible = $(".dismissible_banner"); $dismissible.click(function(ev) { $dismissible.hide(); }); });</script> <script src="//a.academia-assets.com/assets/webpack_bundles/profile.wjs-bundle-5888e8cdcd0801926d9a9f3090e2416b8e78912036f8e63b1c6faa35593f3647.js" defer="defer"></script><script>$viewedUser = Aedu.User.set_viewed( {"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner","photo":"https://0.academia-photos.com/13155/4407/155010730/s65_beat.signer.png","has_photo":true,"department":{"id":3039,"name":"Computer Science","url":"https://vub.academia.edu/Departments/Computer_Science/Documents","university":{"id":1396,"name":"Vrije Universiteit Brussel","url":"https://vub.academia.edu/"}},"position":"Faculty Member","position_id":1,"is_analytics_public":false,"interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":11085,"name":"Cross-Media Information 
Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":2947471,"name":"Data Physicalization","url":"https://www.academia.edu/Documents/in/Data_Physicalization"},{"id":27906,"name":"Media technology","url":"https://www.academia.edu/Documents/in/Media_technology"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":449899,"name":"Big Data Visualisation","url":"https://www.academia.edu/Documents/in/Big_Data_Visualisation"},{"id":10907,"name":"Context-Aware Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":4198,"name":"Mobile Technology","url":"https://www.academia.edu/Documents/in/Mobile_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":5978,"name":"Web Technologies","url":"https://www.academia.edu/Documents/in/Web_Technologies"},{"id":1384,"name":"Web Engineering","url":"https://www.academia.edu/Documents/in/Web_Engineering"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":29124,"name":"Web Science","url":"https://www.academia.edu/Documents/in/Web_Science"},{"id":11732,"name":"Linked Data","url":"https://www.academia.edu/Documents/in/Linked_Data"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":3429,"name":"Educational Research","url":"https://www.academia.edu/Documents/in/Educational_Research"},{"id":1197,"name":"Digital Humanities","url":"https://www.academia.edu/Documents/in/Digital_Humanities"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2869,"name":"Digital 
Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":16999,"name":"Personal Knowledge Management","url":"https://www.academia.edu/Documents/in/Personal_Knowledge_Management"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":193390,"name":"RSL","url":"https://www.academia.edu/Documents/in/RSL"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":1615,"name":"Usability","url":"https://www.academia.edu/Documents/in/Usability"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":30947,"name":"The Internet","url":"https://www.academia.edu/Documents/in/The_Internet"},{"id":39433,"name":"Ambient Intelligence","url":"https://www.academia.edu/Documents/in/Ambient_Intelligence"},{"id":69100,"name":"Data Science","url":"https://www.academia.edu/Documents/in/Data_Science"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":1589,"name":"Photography","url":"https://www.academia.edu/Documents/in/Photography"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":39370,"name":"Mixed Reality","url":"https://www.academia.edu/Documents/in/Mixed_Reality"},{"id":48591,"name":"Data Visualisation","url":"https://www.academia.edu/Documents/in/Data_Visualisation"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":1744493,"name":"Science and Technology Studies","url":"https://www.academia.edu/Documents/in/Science_and_Technology_Studies"},{"id":27360,"name":"Databases","url":"https://www.academia.edu/Documents/in/Databases"},{"id":126300,"name":"Big Data","url":"https://www.academia.edu/Documents/in/Big_Data"},{"id":47980,"name":"Data Visualization","url":"https://www.academia.edu/Documents/in/Data_Visualization"},{"id":928,"name":"Media Studies","url":"https://www.academia.edu/Documents/in/Media_Studies"},{"id":988,"name":"Design","url":"https://www.academia.edu/Documents/in/Design"},{"id":289278,"name":"Big Data Analytics","url":"https://www.academia.edu/Documents/in/Big_Data_Analytics"},{"id":2009,"name":"Data Mining","url":"https://www.academia.edu/Documents/in/Data_Mining"},{"id":464,"name":"Information 
Retrieval","url":"https://www.academia.edu/Documents/in/Information_Retrieval"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":14304,"name":"Usability and user experience","url":"https://www.academia.edu/Documents/in/Usability_and_user_experience"},{"id":13647,"name":"Physical Computing","url":"https://www.academia.edu/Documents/in/Physical_Computing"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":18408,"name":"Content Management","url":"https://www.academia.edu/Documents/in/Content_Management"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":27417,"name":"Mobile HCI","url":"https://www.academia.edu/Documents/in/Mobile_HCI"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":2896,"name":"Wearable Computing","url":"https://www.academia.edu/Documents/in/Wearable_Computing"},{"id":13958,"name":"Media","url":"https://www.academia.edu/Documents/in/Media"},{"id":9246,"name":"Social Media","url":"https://www.academia.edu/Documents/in/Social_Media"},{"id":465,"name":"Artificial Intelligence","url":"https://www.academia.edu/Documents/in/Artificial_Intelligence"},{"id":41030,"name":"Mobile Augmented Reality","url":"https://www.academia.edu/Documents/in/Mobile_Augmented_Reality"},{"id":1226904,"name":"Computer Science and Engineering","url":"https://www.academia.edu/Documents/in/Computer_Science_and_Engineering-1"},{"id":2099,"name":"Information Society","url":"https://www.academia.edu/Documents/in/Information_Society"},{"id":803,"name":"Philosophy","url":"https://www.academia.edu/Documents/in/Philosophy"},{"id":821,"name":"Philosophy of Science","url":"https://www.academia.edu/Documents/in/Philosophy_of_Science"},{"id":17712,"name":"Science and Technology","url":"https://www.academia.edu/Documents/in/Science_and_Technology"},{"id":672167,"name":"Cross-Media Information Spaces and Architectures (CISA)","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces_and_Architectures_CISA_"},{"id":854,"name":"Computer Vision","url":"https://www.academia.edu/Documents/in/Computer_Vision"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":2209,"name":"Cross-Media Studies","url":"https://www.academia.edu/Documents/in/Cross-Media_Studies"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"},{"id":93455,"name":"Infographics and data visualization","url":"https://www.academia.edu/Documents/in/Infographics_and_data_visualization"},{"id":887,"name":"Teaching and 
Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":2065,"name":"Research Methodology","url":"https://www.academia.edu/Documents/in/Research_Methodology"},{"id":17760,"name":"Digital Arts","url":"https://www.academia.edu/Documents/in/Digital_Arts"},{"id":3962,"name":"Visual Literacy","url":"https://www.academia.edu/Documents/in/Visual_Literacy"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":1763,"name":"Mobile Learning","url":"https://www.academia.edu/Documents/in/Mobile_Learning"}]} ); if ($a.is_logged_in() && $viewedUser.is_current_user()) { $('body').addClass('profile-viewed-by-owner'); } $socialProfiles = [{"id":73179,"link":"http://www.beatsigner.com","name":"Homepage","link_domain":"www.beatsigner.com","icon":"//www.google.com/s2/u/0/favicons?domain=www.beatsigner.com"},{"id":73180,"link":"http://www.linkedin.com/in/signer","name":"LinkedIn Profile","link_domain":"www.linkedin.com","icon":"//www.google.com/s2/u/0/favicons?domain=www.linkedin.com"},{"id":253240,"link":"http://twitter.com/beatsigner","name":"Twitter","link_domain":"twitter.com","icon":"//www.google.com/s2/u/0/favicons?domain=twitter.com"},{"id":253241,"link":"http://scholar.google.be/citations?user=v16OAn78FFUJ\u0026hl=en","name":"Google Scholar","link_domain":"scholar.google.be","icon":"//www.google.com/s2/u/0/favicons?domain=scholar.google.be"},{"id":8370998,"link":"https://www.facebook.com/bsigner","name":"Facebook","link_domain":"www.facebook.com","icon":"//www.google.com/s2/u/0/favicons?domain=www.facebook.com"},{"id":8371022,"link":"http://wise.vub.ac.be/beat-signer","name":"WISE Research Group","link_domain":"wise.vub.ac.be","icon":"//www.google.com/s2/u/0/favicons?domain=wise.vub.ac.be"},{"id":8371058,"link":"https://www.youtube.com/channel/UCkb4k2qvCVvOghh3wFrBnhw","name":"YouTube Channel","link_domain":"www.youtube.com","icon":"//www.google.com/s2/u/0/favicons?domain=www.youtube.com"},{"id":9413988,"link":"http://www.informatik.uni-trier.de/~ley/pers/hd/s/Signer:Beat","name":"dblp computer science bibliography","link_domain":"www.informatik.uni-trier.de","icon":"//www.google.com/s2/u/0/favicons?domain=www.informatik.uni-trier.de"},{"id":28322062,"link":"https://www.instagram.com/beat_signer/","name":"Instagram","link_domain":"www.instagram.com","icon":"//www.google.com/s2/u/0/favicons?domain=www.instagram.com"},{"id":28322114,"link":"http://orcid.org/0000-0001-9916-0837","name":"ORCID","link_domain":"orcid.org","icon":"//www.google.com/s2/u/0/favicons?domain=orcid.org"},{"id":58035743,"link":"https://www.researchgate.net/profile/Beat_Signer","name":"ResearchGate","link_domain":"www.researchgate.net","icon":"//www.google.com/s2/u/0/favicons?domain=www.researchgate.net"},{"id":66073807,"link":"https://dl.acm.org/profile/81100250652","name":"ACM Profile","link_domain":"dl.acm.org","icon":"//www.google.com/s2/u/0/favicons?domain=dl.acm.org"},{"id":66073822,"link":"https://speakerdeck.com/signer","name":"Speaker Deck","link_domain":"speakerdeck.com","icon":"//www.google.com/s2/u/0/favicons?domain=speakerdeck.com"},{"id":66073824,"link":"https://beatsigner.smugmug.com/Portfolios/Wildlife/","name":"SmugMug","link_domain":"beatsigner.smugmug.com","icon":"//www.google.com/s2/u/0/favicons?domain=beatsigner.smugmug.com"}]</script><div id="js-react-on-rails-context" style="display:none" 
data-rails-context="{&quot;inMailer&quot;:false,&quot;i18nLocale&quot;:&quot;en&quot;,&quot;i18nDefaultLocale&quot;:&quot;en&quot;,&quot;href&quot;:&quot;https://vub.academia.edu/BeatSigner&quot;,&quot;location&quot;:&quot;/BeatSigner&quot;,&quot;scheme&quot;:&quot;https&quot;,&quot;host&quot;:&quot;vub.academia.edu&quot;,&quot;port&quot;:null,&quot;pathname&quot;:&quot;/BeatSigner&quot;,&quot;search&quot;:null,&quot;httpAcceptLanguage&quot;:null,&quot;serverSide&quot;:false}"></div> <div class="js-react-on-rails-component" style="display:none" data-component-name="ProfileCheckPaperUpdate" data-props="{}" data-trace="false" data-dom-id="ProfileCheckPaperUpdate-react-component-1817f6da-f023-4ae7-829e-89e7c6329f9a"></div> <div id="ProfileCheckPaperUpdate-react-component-1817f6da-f023-4ae7-829e-89e7c6329f9a"></div> <div class="DesignSystem"><div class="onsite-ping" id="onsite-ping"></div></div><div class="profile-user-info DesignSystem"><div class="social-profile-container"><div class="left-panel-container"><div class="user-info-component-wrapper"><div class="user-summary-cta-container"><div class="user-summary-container"><div class="social-profile-avatar-container"><img class="profile-avatar u-positionAbsolute" alt="Beat Signer" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" src="https://0.academia-photos.com/13155/4407/155010730/s200_beat.signer.png" /></div><div class="title-container"><h1 class="ds2-5-heading-sans-serif-sm">Beat Signer</h1><div class="affiliations-container fake-truncate js-profile-affiliations"><div><a class="u-tcGrayDarker" href="https://vub.academia.edu/">Vrije Universiteit Brussel</a>, <a class="u-tcGrayDarker" href="https://vub.academia.edu/Departments/Computer_Science/Documents">Computer Science</a>, <span class="u-tcGrayDarker">Faculty Member</span></div></div></div></div><div class="sidebar-cta-container"><button class="ds2-5-button hidden profile-cta-button grow js-profile-follow-button" data-broccoli-component="user-info.follow-button" data-click-track="profile-user-info-follow-button" data-follow-user-fname="Beat" data-follow-user-id="13155" data-follow-user-source="profile_button" data-has-google="false"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">add</span>Follow</button><button class="ds2-5-button hidden profile-cta-button grow js-profile-unfollow-button" data-broccoli-component="user-info.unfollow-button" data-click-track="profile-user-info-unfollow-button" data-unfollow-user-id="13155"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">done</span>Following</button></div></div><div class="user-stats-container"><a><div class="stat-container js-profile-followers"><p class="label">Followers</p><p class="data">26,876</p></div></a><a><div class="stat-container js-profile-followees" data-broccoli-component="user-info.followees-count" data-click-track="profile-expand-user-info-following"><p class="label">Following</p><p class="data">438</p></div></a><a><div class="stat-container js-profile-coauthors" data-broccoli-component="user-info.coauthors-count" data-click-track="profile-expand-user-info-coauthors"><p class="label">Co-authors</p><p class="data">38</p></div></a><span><div class="stat-container"><p class="label"><span class="js-profile-total-view-text">Public Views</span></p><p class="data"><span 
class="js-profile-view-count"></span></p></div></span></div><div class="user-bio-container"><div class="profile-bio fake-truncate js-profile-about" style="margin: 0px;">Professor of Computer Science, https://beatsigner.com<br /><br />Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) in Belgium and director of the Web and Information System Engineering (WISE) lab. He studied Computer Science at ETH Zurich and obtained a PhD in Computer Science from ETH Zurich.<br /><br />CURRENT RESEARCH<br />With his research group, Beat is investigating cross-media information spaces and architectures (CISA), cross-media technologies, interactive paper and augmented reality solutions, dynamic data physicalisation, technology-enhanced learning as well as multimodal interaction frameworks and hybrid positioning solutions.&nbsp; Thereby, they are coming up with new document formats for representing information across different types of media as well as fluid user interfaces for interacting with the resulting cross-media information spaces. A significant part of the research is based on extensions and applications of the resource-selector-link (RSL) hypermedia metamodel.<br /><br />RESEARCH INTERESTS<br /> * cross-media technologies<br /> * human-information interaction <br /> * augmented reality and interactive paper<br /> * data physicalisation and tangible holograms<br /> * personal information management<br /> * technology-enhanced learning<br /> * internet of things and web of things<br /> * document engineering<br /><br />TEACHING<br /> * Next Generation User Interfaces<br /> * Human-Computer Interaction<br /> * Information Visualisation<br /> * Web Technologies<br /> * Advanced Topics in Big Data<br /> * Advanced Topics in Information Systems (past)<br /> * Introduction to Databases (past)<br /> * Databases (past)<br /> <br />As part of the European Paper++ and PaperWorks projects, Beat Signer has developed the interactive paper (iPaper) framework for integrating paper and digital services and information. Different solutions for innovative forms of interactive paper document publishing have been realised and the iPaper framework has been applied in a variety of applications including PaperPoint, EdFest, Generosa Enterprise, the Lost Cosmonaut, Print-n-Link, PaperProof and other solutions. 
Based on the experience that he has gained from realising various interactive paper applications, he is currently investigating general design patterns and metaphors for cross-media user interfaces.<br /><span class="u-fw700">Phone:&nbsp;</span>+32 2 629 12 39<br /><b>Address:&nbsp;</b>Vrije Universiteit Brussel<br />Department of Computer Science<br />Pleinlaan 2<br />1050 Brussels (Belgium)<br /><br />Office: PL9.3.60 (Pleinlaan 9)<br /><div class="js-profile-less-about u-linkUnstyled u-tcGrayDarker u-textDecorationUnderline u-displayNone">less</div></div></div><div class="suggested-academics-container"><div class="suggested-academics--header"><h3 class="ds2-5-heading-sans-serif-xs">Related Authors</h3></div><ul class="suggested-user-card-list" data-nosnippet="true"><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://vub.academia.edu/YoshiMalaise"><img class="profile-avatar u-positionAbsolute" alt="Yoshi Malaise related author profile picture" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" src="https://0.academia-photos.com/230910613/87328080/76006785/s200_yoshi.malaise.png" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://vub.academia.edu/YoshiMalaise">Yoshi Malaise</a><p class="suggested-user-card__user-info__subheader ds2-5-body-xs">Vrije Universiteit Brussel</p></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/EvanCole15"><img class="profile-avatar u-positionAbsolute" alt="Evan Cole related author profile picture" border="0" src="//a.academia-assets.com/images/s200_no_pic.png" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/EvanCole15">Evan Cole</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://uth.academia.edu/VasileiosVlachos"><img class="profile-avatar u-positionAbsolute" alt="Vasileios Vlachos related author profile picture" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" src="https://0.academia-photos.com/272536/59768/94642661/s200_vasileios.vlachos.jpg" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://uth.academia.edu/VasileiosVlachos">Vasileios Vlachos</a><p class="suggested-user-card__user-info__subheader ds2-5-body-xs">UNIVERSITY OF THESSALY, GREECE</p></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/HamzaManzoor"><img class="profile-avatar u-positionAbsolute" alt="Hamza Manzoor related author profile picture" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" 
src="https://0.academia-photos.com/9445691/18025012/18030610/s200_hamza.manzoor.jpg" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/HamzaManzoor">Hamza Manzoor</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://vmwork.academia.edu/RodrigoDuran"><img class="profile-avatar u-positionAbsolute" alt="Rodrigo Duran related author profile picture" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" src="https://0.academia-photos.com/52222343/13813047/14907742/s200_rodrigo.duran.jpg" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://vmwork.academia.edu/RodrigoDuran">Rodrigo Duran</a><p class="suggested-user-card__user-info__subheader ds2-5-body-xs">Aalto University, School of Science</p></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/bhagyamunasinghe1"><img class="profile-avatar u-positionAbsolute" alt="bhagya munasinghe related author profile picture" border="0" onerror="if (this.src != &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;) this.src = &#39;//a.academia-assets.com/images/s200_no_pic.png&#39;;" width="200" height="200" src="https://0.academia-photos.com/286792553/134277935/123714186/s200_bhagya.munasinghe.jpeg" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/bhagyamunasinghe1">bhagya munasinghe</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/AmandaNero1"><img class="profile-avatar u-positionAbsolute" alt="Amanda Nero related author profile picture" border="0" src="//a.academia-assets.com/images/s200_no_pic.png" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/AmandaNero1">Amanda Nero</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/AndrewBellas1"><img class="profile-avatar u-positionAbsolute" alt="Andrew Bellas related author profile picture" border="0" src="//a.academia-assets.com/images/s200_no_pic.png" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/AndrewBellas1">Andrew Bellas</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/ChristinaStoiber"><img class="profile-avatar u-positionAbsolute" alt="Christina Stoiber related author profile picture" border="0" src="//a.academia-assets.com/images/s200_no_pic.png" /></a></div><div class="suggested-user-card__user-info"><a 
class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/ChristinaStoiber">Christina Stoiber</a></div></div><div class="suggested-user-card"><div class="suggested-user-card__avatar social-profile-avatar-container"><a data-nosnippet="" href="https://independent.academia.edu/KlementynaJankiewicz"><img class="profile-avatar u-positionAbsolute" alt="Klem Jankiewicz related author profile picture" border="0" src="//a.academia-assets.com/images/s200_no_pic.png" /></a></div><div class="suggested-user-card__user-info"><a class="suggested-user-card__user-info__header ds2-5-body-sm-bold ds2-5-body-link" href="https://independent.academia.edu/KlementynaJankiewicz">Klem Jankiewicz</a></div></div></ul></div><style type="text/css">.suggested-academics--header h3{font-size:16px;font-weight:500;line-height:20px}</style><div class="ri-section"><div class="ri-section-header"><span>Interests</span><a class="ri-more-link js-profile-ri-list-card" data-click-track="profile-user-info-primary-research-interest" data-has-card-for-ri-list="13155">View All (70)</a></div><div class="ri-tags-container"><a data-click-track="profile-user-info-expand-research-interests" data-has-card-for-ri-list="13155" href="https://www.academia.edu/Documents/in/Computer_Science"><div id="js-react-on-rails-context" style="display:none" data-rails-context="{&quot;inMailer&quot;:false,&quot;i18nLocale&quot;:&quot;en&quot;,&quot;i18nDefaultLocale&quot;:&quot;en&quot;,&quot;href&quot;:&quot;https://vub.academia.edu/BeatSigner&quot;,&quot;location&quot;:&quot;/BeatSigner&quot;,&quot;scheme&quot;:&quot;https&quot;,&quot;host&quot;:&quot;vub.academia.edu&quot;,&quot;port&quot;:null,&quot;pathname&quot;:&quot;/BeatSigner&quot;,&quot;search&quot;:null,&quot;httpAcceptLanguage&quot;:null,&quot;serverSide&quot;:false}"></div> <div class="js-react-on-rails-component" style="display:none" data-component-name="Pill" data-props="{&quot;color&quot;:&quot;gray&quot;,&quot;children&quot;:[&quot;Computer Science&quot;]}" data-trace="false" data-dom-id="Pill-react-component-8e1779f3-8fae-47bd-b6d4-897546784885"></div> <div id="Pill-react-component-8e1779f3-8fae-47bd-b6d4-897546784885"></div> </a><a data-click-track="profile-user-info-expand-research-interests" data-has-card-for-ri-list="13155" href="https://www.academia.edu/Documents/in/Human_Computer_Interaction"><div class="js-react-on-rails-component" style="display:none" data-component-name="Pill" data-props="{&quot;color&quot;:&quot;gray&quot;,&quot;children&quot;:[&quot;Human Computer Interaction&quot;]}" data-trace="false" data-dom-id="Pill-react-component-06f7221d-7877-4dae-9089-66f974bf314e"></div> <div id="Pill-react-component-06f7221d-7877-4dae-9089-66f974bf314e"></div> </a><a data-click-track="profile-user-info-expand-research-interests" data-has-card-for-ri-list="13155" href="https://www.academia.edu/Documents/in/Information_Science"><div class="js-react-on-rails-component" style="display:none" data-component-name="Pill" data-props="{&quot;color&quot;:&quot;gray&quot;,&quot;children&quot;:[&quot;Information Science&quot;]}" data-trace="false" data-dom-id="Pill-react-component-51f8df45-9e11-4fe8-84c7-49bc52883536"></div> <div id="Pill-react-component-51f8df45-9e11-4fe8-84c7-49bc52883536"></div> </a><a data-click-track="profile-user-info-expand-research-interests" data-has-card-for-ri-list="13155" href="https://www.academia.edu/Documents/in/Interactive_Paper"><div class="js-react-on-rails-component" style="display:none" 
data-component-name="Pill" data-props="{&quot;color&quot;:&quot;gray&quot;,&quot;children&quot;:[&quot;Interactive Paper&quot;]}" data-trace="false" data-dom-id="Pill-react-component-3fee7f4d-7b36-4559-82bb-a5e17f3a94da"></div> <div id="Pill-react-component-3fee7f4d-7b36-4559-82bb-a5e17f3a94da"></div> </a><a data-click-track="profile-user-info-expand-research-interests" data-has-card-for-ri-list="13155" href="https://www.academia.edu/Documents/in/Personal_Information_Management"><div class="js-react-on-rails-component" style="display:none" data-component-name="Pill" data-props="{&quot;color&quot;:&quot;gray&quot;,&quot;children&quot;:[&quot;Personal Information Management&quot;]}" data-trace="false" data-dom-id="Pill-react-component-98b97969-5c1c-4155-a331-6eacfda2414e"></div> <div id="Pill-react-component-98b97969-5c1c-4155-a331-6eacfda2414e"></div> </a></div></div><div class="external-links-container"><ul class="profile-links new-profile js-UserInfo-social"><li class="left-most js-UserInfo-social-cv" data-broccoli-component="user-info.cv-button" data-click-track="profile-user-info-cv" data-cv-filename="signerCV.pdf" data-placement="top" data-toggle="tooltip" href="/BeatSigner/CurriculumVitae"><button class="ds2-5-text-link ds2-5-text-link--small" style="font-size: 20px; letter-spacing: 0.8px"><span class="ds2-5-text-link__content">CV</span></button></li><li><a class="ds2-5-text-link ds2-5-text-link--small" href="https://beatsigner.academia.edu/"><span class="ds2-5-text-link__content"><i class="fa fa-laptop"></i></span></a></li><li class="profile-profiles js-social-profiles-container"><i class="fa fa-spin fa-spinner"></i></li></ul></div></div></div><div class="right-panel-container"><div class="user-content-wrapper"><div class="uploads-container" id="social-redesign-work-container"><div class="upload-header"><h2 class="ds2-5-heading-sans-serif-xs">Uploads</h2></div><div class="nav-container backbone-profile-documents-nav hidden-xs"><ul class="nav-tablist" role="tablist"><li class="nav-chip active" role="presentation"><a data-section-name="" data-toggle="tab" href="#all" role="tab">all</a></li><li class="nav-chip" role="presentation"><a class="js-profile-docs-nav-section u-textTruncate" data-click-track="profile-works-tab" data-section-name="Books" data-toggle="tab" href="#books" role="tab" title="Books"><span>1</span>&nbsp;<span class="ds2-5-body-sm-bold">Books</span></a></li><li class="nav-chip" role="presentation"><a class="js-profile-docs-nav-section u-textTruncate" data-click-track="profile-works-tab" data-section-name="Interviews" data-toggle="tab" href="#interviews" role="tab" title="Interviews"><span>1</span>&nbsp;<span class="ds2-5-body-sm-bold">Interviews</span></a></li><li class="nav-chip" role="presentation"><a class="js-profile-docs-nav-section u-textTruncate" data-click-track="profile-works-tab" data-section-name="Flyers" data-toggle="tab" href="#flyers" role="tab" title="Flyers"><span>2</span>&nbsp;<span class="ds2-5-body-sm-bold">Flyers</span></a></li><li class="nav-chip" role="presentation"><a class="js-profile-docs-nav-section u-textTruncate" data-click-track="profile-works-tab" data-section-name="Papers" data-toggle="tab" href="#papers" role="tab" title="Papers"><span>145</span>&nbsp;<span class="ds2-5-body-sm-bold">Papers</span></a></li><li class="nav-chip more-tab" role="presentation"><a class="js-profile-documents-more-tab link-unstyled u-textTruncate" data-toggle="dropdown" role="tab">More&nbsp;&nbsp;<i class="fa fa-chevron-down"></i></a><ul 
class="js-profile-documents-more-dropdown dropdown-menu dropdown-menu-right profile-documents-more-dropdown" role="menu"><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Talks" data-toggle="tab" href="#talks" role="tab" style="border: none;"><span>21</span>&nbsp;Talks</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Datasets" data-toggle="tab" href="#datasets" role="tab" style="border: none;"><span>7</span>&nbsp;Datasets</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Teaching-Documents" data-toggle="tab" href="#teachingdocuments" role="tab" style="border: none;"><span>33</span>&nbsp;Teaching Documents</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Edited-Proceedings" data-toggle="tab" href="#editedproceedings" role="tab" style="border: none;"><span>3</span>&nbsp;Edited Proceedings</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Dissertations" data-toggle="tab" href="#dissertations" role="tab" style="border: none;"><span>1</span>&nbsp;Dissertations</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Conference-Presentations" data-toggle="tab" href="#conferencepresentations" role="tab" style="border: none;"><span>29</span>&nbsp;Conference Presentations</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Videos" data-toggle="tab" href="#videos" role="tab" style="border: none;"><span>10</span>&nbsp;Videos</a></li><li role="presentation"><a data-click-track="profile-works-tab" data-section-name="Misc" data-toggle="tab" href="#misc" role="tab" style="border: none;"><span>0</span>&nbsp;Misc</a></li></ul></li></ul></div><div class="divider ds-divider-16" style="margin: 0px;"></div><div class="documents-container backbone-social-profile-documents" style="width: 100%;"><div class="u-taCenter"></div><div class="profile--tab_content_container js-tab-pane tab-pane active" id="all"><div class="profile--tab_heading_container js-section-heading" data-section="Books" id="Books"><h3 class="profile--tab_heading_container">Books by Beat Signer</h3></div><div class="js-work-strip profile--work_container" data-work-id="175411"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/175411/Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces"><img alt="Research paper thumbnail of Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces" class="work-thumbnail" src="https://attachments.academia-assets.com/54076380/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/175411/Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces">Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">While there have been dramatic increases in the use of digital technologies for information stora...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more 
</span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. In this book we review a wide variety of projects and technological developments for bridging the paper-digital divide. We present our information-centric approach for a tight integration of paper and digital information that is based on a general cross-media information platform. Different innovative augmented paper applications that have been developed based on our interactive paper platform and Anoto Digital Pen and Paper technology are introduced. For example, these applications include a mobile interactive paper-based tourist information system (EdFest) and a paper-digital presentation tool (PaperPoint). Challenges and solutions for new forms of interactive paper and cross-media publishing are discussed. The book is targeted at developers and researchers in information systems, hypermedia and human computer interaction, professionals from the printing and publishing industry as well as readers with a general interest in the future of paper. <br /> <br />Buy from Amazon: <a href="http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139" rel="nofollow">http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="9bdf01fac66a1a290ae0557a4672c903" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:54076380,&quot;asset_id&quot;:175411,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/54076380/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="175411"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="175411"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 175411; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=175411]").text(description); $(".js-view-count[data-work-id=175411]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 175411; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='175411']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div 
id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "9bdf01fac66a1a290ae0557a4672c903" } } $('.js-work-strip[data-work-id=175411]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":175411,"title":"Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces","translated_title":"","metadata":{"abstract":"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. In this book we review a wide variety of projects and technological developments for bridging the paper-digital divide. We present our information-centric approach for a tight integration of paper and digital information that is based on a general cross-media information platform. Different innovative augmented paper applications that have been developed based on our interactive paper platform and Anoto Digital Pen and Paper technology are introduced. For example, these applications include a mobile interactive paper-based tourist information system (EdFest) and a paper-digital presentation tool (PaperPoint). Challenges and solutions for new forms of interactive paper and cross-media publishing are discussed. The book is targeted at developers and researchers in information systems, hypermedia and human computer interaction, professionals from the printing and publishing industry as well as readers with a general interest in the future of paper.\r\n\r\nBuy from Amazon: http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139","publication_date":{"day":null,"month":null,"year":2017,"errors":{}}},"translated_abstract":"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. In this book we review a wide variety of projects and technological developments for bridging the paper-digital divide. We present our information-centric approach for a tight integration of paper and digital information that is based on a general cross-media information platform. Different innovative augmented paper applications that have been developed based on our interactive paper platform and Anoto Digital Pen and Paper technology are introduced. For example, these applications include a mobile interactive paper-based tourist information system (EdFest) and a paper-digital presentation tool (PaperPoint). Challenges and solutions for new forms of interactive paper and cross-media publishing are discussed. 
The book is targeted at developers and researchers in information systems, hypermedia and human computer interaction, professionals from the printing and publishing industry as well as readers with a general interest in the future of paper.\r\n\r\nBuy from Amazon: http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139","internal_url":"https://www.academia.edu/175411/Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces","translated_internal_url":"","created_at":"2009-03-16T08:59:43.243-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"book","co_author_tags":[],"downloadable_attachments":[{"id":54076380,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/54076380/thumbnails/1.jpg","file_name":"signer2017b.pdf","download_url":"https://www.academia.edu/attachments/54076380/download_file","bulk_download_file_name":"Fundamental_Concepts_for_Interactive_Pap.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/54076380/signer2017b-libre.pdf?1502094237=\u0026response-content-disposition=attachment%3B+filename%3DFundamental_Concepts_for_Interactive_Pap.pdf\u0026Expires=1744203890\u0026Signature=QHnnCJ9dH-SZYTXH3DP3DLMK9wQLXyvYaXCD-KAmEYgIpFcFjMSFx0KqHxjI74nsDQXTCdBR6~H~B-fGsRCa6qC8rsVNswa~xf3K2n9AKPDN~yJnLPmZp6uRuCHtyZZHcbgVhl-ZHlo7HnSo-rE-tRo32TFCOwtmacKw2NQx7XX9VXpFXe2~tkywxBHdvJ4hButMTADeqlIWTLhs-fUpO2Zo7yBE0W4olgr6lHxH0S--f0g~TnnwNdPlAxoYatvXmWYXlNDO-dbatrtCB2GlOWL1MDJhVPwjRtKcgst7eIHgN~JWxT~BiUfQbebRJPKrSEOwLd2aN8hLket7J44suA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces","translated_slug":"","page_count":278,"language":"en","content_type":"Work","summary":"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. In this book we review a wide variety of projects and technological developments for bridging the paper-digital divide. We present our information-centric approach for a tight integration of paper and digital information that is based on a general cross-media information platform. Different innovative augmented paper applications that have been developed based on our interactive paper platform and Anoto Digital Pen and Paper technology are introduced. For example, these applications include a mobile interactive paper-based tourist information system (EdFest) and a paper-digital presentation tool (PaperPoint). Challenges and solutions for new forms of interactive paper and cross-media publishing are discussed. 
The book is targeted at developers and researchers in information systems, hypermedia and human computer interaction, professionals from the printing and publishing industry as well as readers with a general interest in the future of paper.\r\n\r\nBuy from Amazon: http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":54076380,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/54076380/thumbnails/1.jpg","file_name":"signer2017b.pdf","download_url":"https://www.academia.edu/attachments/54076380/download_file","bulk_download_file_name":"Fundamental_Concepts_for_Interactive_Pap.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/54076380/signer2017b-libre.pdf?1502094237=\u0026response-content-disposition=attachment%3B+filename%3DFundamental_Concepts_for_Interactive_Pap.pdf\u0026Expires=1744203890\u0026Signature=QHnnCJ9dH-SZYTXH3DP3DLMK9wQLXyvYaXCD-KAmEYgIpFcFjMSFx0KqHxjI74nsDQXTCdBR6~H~B-fGsRCa6qC8rsVNswa~xf3K2n9AKPDN~yJnLPmZp6uRuCHtyZZHcbgVhl-ZHlo7HnSo-rE-tRo32TFCOwtmacKw2NQx7XX9VXpFXe2~tkywxBHdvJ4hButMTADeqlIWTLhs-fUpO2Zo7yBE0W4olgr6lHxH0S--f0g~TnnwNdPlAxoYatvXmWYXlNDO-dbatrtCB2GlOWL1MDJhVPwjRtKcgst7eIHgN~JWxT~BiUfQbebRJPKrSEOwLd2aN8hLket7J44suA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":2482,"name":"Database 
Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":6492,"name":"Storytelling","url":"https://www.academia.edu/Documents/in/Storytelling"},{"id":7454,"name":"Information Communication Technology","url":"https://www.academia.edu/Documents/in/Information_Communication_Technology"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":27360,"name":"Databases","url":"https://www.academia.edu/Documents/in/Databases"},{"id":41030,"name":"Mobile Augmented Reality","url":"https://www.academia.edu/Documents/in/Mobile_Augmented_Reality"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":42896,"name":"Conceptual Modeling","url":"https://www.academia.edu/Documents/in/Conceptual_Modeling"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":51492,"name":"Pulp and Paper + Recycled Paper","url":"https://www.academia.edu/Documents/in/Pulp_and_Paper_Recycled_Paper"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":311668,"name":"Digital Story Telling","url":"https://www.academia.edu/Documents/in/Digital_Story_Telling"},{"id":369022,"name":"Cross Media","url":"https://www.academia.edu/Documents/in/Cross_Media"},{"id":416425,"name":"Tangible Media","url":"https://www.academia.edu/Documents/in/Tangible_Media"},{"id":688513,"name":"Information 
Technology‎","url":"https://www.academia.edu/Documents/in/Information_Technology_"},{"id":721414,"name":"Augmented Paper","url":"https://www.academia.edu/Documents/in/Augmented_Paper"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":4448895,"url":"http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139"},{"id":8277477,"url":"https://www.amazon.com/Fundamental-Concepts-Interactive-Cross-Media-Information-ebook/dp/B0753MK7VN/"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-175411-figures'); } }); </script> <div class="profile--tab_heading_container js-section-heading" data-section="Interviews" id="Interviews"><h3 class="profile--tab_heading_container">Interviews by Beat Signer</h3></div><div class="js-work-strip profile--work_container" data-work-id="46927206"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/46927206/Interview_with_Beat_Signer"><img alt="Research paper thumbnail of Interview with Beat Signer" class="work-thumbnail" src="https://attachments.academia-assets.com/113208906/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/46927206/Interview_with_Beat_Signer">Interview with Beat Signer</a></div><div class="wp-workCard_item"><span>ACM SIGWEB Newsletter 2021(Winter), February 2021</span><span>, 2021</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and codirect...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and codirectorof the Web &amp; Information Systems Engineering (WISE) research lab. He received a PhDin Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab asa senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards.<br /><br />Beat has 20 years of experience in research on cross-media information management and mul-timodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. 
With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit <a href="https://beatsigner.com" rel="nofollow">https://beatsigner.com</a>.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="37619249c5521245d7003d4dfb95dab9" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:113208906,&quot;asset_id&quot;:46927206,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/113208906/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="46927206"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="46927206"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 46927206; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=46927206]").text(description); $(".js-view-count[data-work-id=46927206]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 46927206; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='46927206']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "37619249c5521245d7003d4dfb95dab9" } } $('.js-work-strip[data-work-id=46927206]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: 
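To make the resource-selector-link idea mentioned above more concrete, the following TypeScript sketch models its three core abstractions. All names and fields are illustrative assumptions, not the actual RSL metamodel or any of its implementations, which additionally cover concepts such as users, layers and properties.

```typescript
// A minimal sketch of the resource-selector-link (RSL) idea; names and
// fields are assumptions for illustration only.

// A resource is any addressable piece of media: a document, a web page,
// a printed sheet of interactive paper, a device, ...
interface Resource {
  id: string;
  description: string;
}

// A selector addresses part of a resource, e.g. a text range in a
// document or a rectangular region on a printed page.
interface Selector {
  resource: Resource;
  spec: unknown; // media-specific addressing information
}

// Links connect one or more sources to one or more targets; endpoints
// may be resources, selectors or even other links.
type Endpoint = Resource | Selector | Link;

interface Link {
  sources: Endpoint[];
  targets: Endpoint[];
}

// Example: a region on a printed page linking to a website.
const page: Resource = { id: 'urn:paper:page-42', description: 'Printed page 42' };
const site: Resource = { id: 'https://beatsigner.com', description: 'Author homepage' };
const region: Selector = { resource: page, spec: { x: 10, y: 20, w: 80, h: 30 } };
const link: Link = { sources: [region], targets: [site] };
console.log(link);
```

Because endpoints are typed as resources, selectors or links alike, the same link abstraction works for whole documents, parts of documents and even links between links, which is what makes the model extensible across media types.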
{"id":46927206,"title":"Interview with Beat Signer","translated_title":"","metadata":{"doi":"10.1145/3447879.3447881","abstract":"Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and codirectorof the Web \u0026 Information Systems Engineering (WISE) research lab. He received a PhDin Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab asa senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards.\n\nBeat has 20 years of experience in research on cross-media information management and mul-timodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.","publication_date":{"day":null,"month":null,"year":2021,"errors":{}},"publication_name":"ACM SIGWEB Newsletter 2021(Winter), February 2021"},"translated_abstract":"Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and codirectorof the Web \u0026 Information Systems Engineering (WISE) research lab. He received a PhDin Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab asa senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards.\n\nBeat has 20 years of experience in research on cross-media information management and mul-timodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. 
The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.","internal_url":"https://www.academia.edu/46927206/Interview_with_Beat_Signer","translated_internal_url":"","created_at":"2021-04-18T01:23:55.745-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"book","co_author_tags":[{"id":36449542,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":162672613,"co_author_invite_id":null,"email":"c***s@atzenbeck.de","display_order":1,"name":"Claus Atzenbeck","title":"Interview with Beat Signer"},{"id":36449543,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":6826860,"email":"b***r@vub.be","display_order":3,"name":"Beat Signer","title":"Interview with Beat Signer"},{"id":36449544,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":6640951,"email":"b***r@vub.ac","display_order":4,"name":"Beat Signer","title":"Interview with Beat Signer"},{"id":36449545,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":6640952,"email":"s***r@inf.ethz","display_order":5,"name":"Beat Signer","title":"Interview with Beat Signer"},{"id":36449546,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":13155,"co_author_invite_id":6640953,"email":"b***r@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":6,"name":"Beat Signer","title":"Interview with Beat Signer"},{"id":36449547,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":921185,"email":"s***r@inf.ethz.ch","display_order":7,"name":"Beat Signer","title":"Interview with Beat Signer"},{"id":36449548,"work_id":46927206,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":6640954,"email":"b***r@vub.ac.be","display_order":8,"name":"Beat Signer","title":"Interview with Beat Signer"}],"downloadable_attachments":[{"id":113208906,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/113208906/thumbnails/1.jpg","file_name":"interview_with_beat_signer.pdf","download_url":"https://www.academia.edu/attachments/113208906/download_file","bulk_download_file_name":"Interview_with_Beat_Signer.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/113208906/interview_with_beat_signer-libre.pdf?1712776904=\u0026response-content-disposition=attachment%3B+filename%3DInterview_with_Beat_Signer.pdf\u0026Expires=1744203890\u0026Signature=AxxBQ6ymDg8oSjqflRTXNlYouhrebEqgkjoyV-dk0lkRuKmRLMGjcj7Iih5BAxkiGyuJjctgJur1zpruA8gLL9ZrDOhtpLVKFzDh51018ZVtmCX24BfJAA11CQd9RoTp3~Wb5q331p1Xvf7ZQ3FGKOl7fzC-yptVsEWomKfxm8Zpaj0tbSq9fObAiZUU5vBx9R~m7N0JThydubp0LAyjx-twCOvfbJcx9MKwpGmSw~6lXxMTDgc2sRAGEZ4R2i28GWKjiuOdU9ToWyFPkrnf512FGTQQk8UJkFlpYjfp~78MHopckycyYKfi~P1sXcej7WmJamxWnexIC1pknGvVGQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Interview_with_Beat_Signer","translated_slug":"","page_count":5,"language":"en","content_type":"Work","summary":"Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and codirectorof the Web \u0026 Information Systems Engineering (WISE) research lab. 
He received a PhDin Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab asa senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards.\n\nBeat has 20 years of experience in research on cross-media information management and mul-timodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":113208906,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/113208906/thumbnails/1.jpg","file_name":"interview_with_beat_signer.pdf","download_url":"https://www.academia.edu/attachments/113208906/download_file","bulk_download_file_name":"Interview_with_Beat_Signer.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/113208906/interview_with_beat_signer-libre.pdf?1712776904=\u0026response-content-disposition=attachment%3B+filename%3DInterview_with_Beat_Signer.pdf\u0026Expires=1744203890\u0026Signature=AxxBQ6ymDg8oSjqflRTXNlYouhrebEqgkjoyV-dk0lkRuKmRLMGjcj7Iih5BAxkiGyuJjctgJur1zpruA8gLL9ZrDOhtpLVKFzDh51018ZVtmCX24BfJAA11CQd9RoTp3~Wb5q331p1Xvf7ZQ3FGKOl7fzC-yptVsEWomKfxm8Zpaj0tbSq9fObAiZUU5vBx9R~m7N0JThydubp0LAyjx-twCOvfbJcx9MKwpGmSw~6lXxMTDgc2sRAGEZ4R2i28GWKjiuOdU9ToWyFPkrnf512FGTQQk8UJkFlpYjfp~78MHopckycyYKfi~P1sXcej7WmJamxWnexIC1pknGvVGQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5266,"name":"Human Information 
Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5978,"name":"Web Technologies","url":"https://www.academia.edu/Documents/in/Web_Technologies"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":15893,"name":"Hypertext theory","url":"https://www.academia.edu/Documents/in/Hypertext_theory"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":193390,"name":"RSL","url":"https://www.academia.edu/Documents/in/RSL"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"}],"urls":[{"id":29589554,"url":"https://beatsigner.com/publications/interview-with-beat-signer.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-46927206-figures'); } }); </script> <div class="profile--tab_heading_container js-section-heading" data-section="Flyers" id="Flyers"><h3 class="profile--tab_heading_container">Flyers by Beat Signer</h3></div><div class="js-work-strip profile--work_container" data-work-id="12785185"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool"><img alt="Research paper thumbnail of MindXpres - An Extensible Cross-Media Presentation Tool" class="work-thumbnail" src="https://attachments.academia-assets.com/51752000/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool">MindXpres - An Extensible Cross-Media Presentation Tool</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="32cf187967198e5a7bbfa140eccca594" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:51752000,&quot;asset_id&quot;:12785185,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/51752000/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="12785185"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa 
fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="12785185"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 12785185; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=12785185]").text(description); $(".js-view-count[data-work-id=12785185]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 12785185; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='12785185']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "32cf187967198e5a7bbfa140eccca594" } } $('.js-work-strip[data-work-id=12785185]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":12785185,"title":"MindXpres - An Extensible Cross-Media Presentation Tool","translated_title":"","metadata":{"ai_abstract":"MindXpres is an innovative presentation platform designed to address the limitations of traditional slideware by providing a plug-in architecture that enhances content presentation. It allows users to integrate various media types and functionalities, focusing on content delivery rather than aesthetic aspects. 
By enabling central content storage and a web-based framework, MindXpres ensures high portability and audience interaction, paving the way for collaborative and engaging presentations."},"translated_abstract":null,"internal_url":"https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool","translated_internal_url":"","created_at":"2015-06-03T18:54:57.671-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":51752000,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/51752000/thumbnails/1.jpg","file_name":"MindXpres.pdf","download_url":"https://www.academia.edu/attachments/51752000/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Cross_Media_Pres.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/51752000/MindXpres-libre.pdf?1486839083=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Cross_Media_Pres.pdf\u0026Expires=1744203890\u0026Signature=E37dAKw3Xbl-K8hug2exMP9EmPCs3wkqJlRmdSrbuCWF7EN3iQwp95~Hpz~mIQ13GVmUx9ZXhvs-rFCj9tGxduISvZIVgerlqkzbvdX9GN6MOfw2cd8aHuQv7FzD4SQzUVFzVItR50P8lEgQmnw7jmzFniW1fmPmvi-E7aMPFu5bgeKfdLWxwa72BTBBOHSXsdo6uMItljFzUz-nKxRM3IyDxrWBT~Jad9sq~ntAmtp6LZlZLyrf2ts9T9ZVqQrtV2SwHE8C36hikJaV~pXoKWdGZ9BW1SmdFpmpiimVj-zuqIF7AmEccFAggOGbeTGb65hJwnUUbzF1fbLnJrsHiA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"MindXpres_An_Extensible_Cross_Media_Presentation_Tool","translated_slug":"","page_count":1,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":51752000,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/51752000/thumbnails/1.jpg","file_name":"MindXpres.pdf","download_url":"https://www.academia.edu/attachments/51752000/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Cross_Media_Pres.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/51752000/MindXpres-libre.pdf?1486839083=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Cross_Media_Pres.pdf\u0026Expires=1744203890\u0026Signature=E37dAKw3Xbl-K8hug2exMP9EmPCs3wkqJlRmdSrbuCWF7EN3iQwp95~Hpz~mIQ13GVmUx9ZXhvs-rFCj9tGxduISvZIVgerlqkzbvdX9GN6MOfw2cd8aHuQv7FzD4SQzUVFzVItR50P8lEgQmnw7jmzFniW1fmPmvi-E7aMPFu5bgeKfdLWxwa72BTBBOHSXsdo6uMItljFzUz-nKxRM3IyDxrWBT~Jad9sq~ntAmtp6LZlZLyrf2ts9T9ZVqQrtV2SwHE8C36hikJaV~pXoKWdGZ9BW1SmdFpmpiimVj-zuqIF7AmEccFAggOGbeTGb65hJwnUUbzF1fbLnJrsHiA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":451,"name":"Programming Languages","url":"https://www.academia.edu/Documents/in/Programming_Languages"},{"id":453,"name":"Object Oriented 
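The plug-in architecture can be pictured with a small TypeScript sketch. The ContentPlugin interface and registry below are assumed names for illustration only, not the actual MindXpres API.

```typescript
// A sketch of a plug-in-based presentation engine in the spirit of
// MindXpres; all names are illustrative assumptions.

interface ContentPlugin {
  // Content type handled by the plug-in (e.g. 'text', 'video', 'quiz').
  type: string;
  // Renders the given content into an HTML element.
  render(content: unknown): HTMLElement;
}

class PluginRegistry {
  private plugins = new Map<string, ContentPlugin>();

  register(plugin: ContentPlugin): void {
    this.plugins.set(plugin.type, plugin);
  }

  // Dispatches a content item to the plug-in registered for its type.
  render(item: { type: string; content: unknown }): HTMLElement {
    const plugin = this.plugins.get(item.type);
    if (!plugin) throw new Error(`No plug-in for content type '${item.type}'`);
    return plugin.render(item.content);
  }
}

// Example plug-in that renders plain text content.
const registry = new PluginRegistry();
registry.register({
  type: 'text',
  render: (content) => {
    const p = document.createElement('p');
    p.textContent = String(content);
    return p;
  },
});

document.body.appendChild(registry.render({ type: 'text', content: 'Hello MindXpres' }));
```

The design choice this illustrates is that the engine itself knows nothing about concrete media types; new content types can be supported by registering additional plug-ins rather than changing the core.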
Programming","url":"https://www.academia.edu/Documents/in/Object_Oriented_Programming"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":492,"name":"Management Information Systems","url":"https://www.academia.edu/Documents/in/Management_Information_Systems"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1007,"name":"Teaching English as a Second Language","url":"https://www.academia.edu/Documents/in/Teaching_English_as_a_Second_Language"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2209,"name":"Cross-Media Studies","url":"https://www.academia.edu/Documents/in/Cross-Media_Studies"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8129,"name":"Software Development","url":"https://www.academia.edu/Documents/in/Software_Development"},{"id":8130,"name":"Web Development","url":"https://www.academia.edu/Documents/in/Web_Development"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":8679,"name":"Computer Supported Collaborative Learning (CSCL)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Collaborative_Learning_CSCL_"},{"id":9270,"name":"Software Architecture","url":"https://www.academia.edu/Documents/in/Software_Architecture"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":12417,"name":"Multimedia Learning","url":"https://www.academia.edu/Documents/in/Multimedia_Learning"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":17951,"name":"Learning And Teaching In Higher 
Education","url":"https://www.academia.edu/Documents/in/Learning_And_Teaching_In_Higher_Education"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":25475,"name":"Learning environments","url":"https://www.academia.edu/Documents/in/Learning_environments"},{"id":25681,"name":"E-learning 2.0","url":"https://www.academia.edu/Documents/in/E-learning_2.0"},{"id":33112,"name":"Blended learning in higher education","url":"https://www.academia.edu/Documents/in/Blended_learning_in_higher_education"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":53293,"name":"Software","url":"https://www.academia.edu/Documents/in/Software"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":148250,"name":"Presentation of Paper in a Seminar","url":"https://www.academia.edu/Documents/in/Presentation_of_Paper_in_a_Seminar"},{"id":242420,"name":"Presentation Slides","url":"https://www.academia.edu/Documents/in/Presentation_Slides"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":407764,"name":"Visual Presentation","url":"https://www.academia.edu/Documents/in/Visual_Presentation"},{"id":502875,"name":"Microsoft Powerpoint","url":"https://www.academia.edu/Documents/in/Microsoft_Powerpoint"},{"id":514903,"name":"Presentasion Training","url":"https://www.academia.edu/Documents/in/Presentasion_Training"},{"id":554420,"name":"PPT Presentation","url":"https://www.academia.edu/Documents/in/PPT_Presentation"},{"id":631659,"name":"Power Point Presentations","url":"https://www.academia.edu/Documents/in/Power_Point_Presentations"},{"id":651491,"name":"Web Information Systems","url":"https://www.academia.edu/Documents/in/Web_Information_Systems"},{"id":672167,"name":"Cross-Media Information Spaces and Architectures (CISA)","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces_and_Architectures_CISA_"},{"id":1019468,"name":"Teaching and Learning In Adult and Higher Education","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning_In_Adult_and_Higher_Education"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"},{"id":1705150,"name":"MindXpres","url":"https://www.academia.edu/Documents/in/MindXpres"}],"urls":[{"id":4839679,"url":"http://www.beatsigner.com/flyers/MindXpres.pdf"},{"id":7945887,"url":"http://mindxpres.com/"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-12785185-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="12785289"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" 
href="https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections"><img alt="Research paper thumbnail of ArtVis - Gaining New Insights from Digital Artwork Collections" class="work-thumbnail" src="https://attachments.academia-assets.com/37820024/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections">ArtVis - Gaining New Insights from Digital Artwork Collections</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="cf46dc33482e1a6847ab6120d4824141" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:37820024,&quot;asset_id&quot;:12785289,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/37820024/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="12785289"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="12785289"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 12785289; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=12785289]").text(description); $(".js-view-count[data-work-id=12785289]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 12785289; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='12785289']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "cf46dc33482e1a6847ab6120d4824141" } } $('.js-work-strip[data-work-id=12785289]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":12785289,"title":"ArtVis - Gaining New Insights from Digital Artwork 
Collections","translated_title":"","metadata":{"ai_abstract":"The ArtVis project employs innovative visualization techniques alongside a tangible user interface to enhance interaction with a vast digital collection of European artwork from the 11th to the 19th century. By enabling users to explore, analyze, and browse artworks through three interconnected visualization components, ArtVis aims to foster new insights. Specialized controls facilitate user-driven exploration across various dimensions, such as artist name, museum, artistic type, and time period, ultimately promoting a playful and exploratory user experience.","ai_title_tag":"ArtVis: Interactive Visualization of Artwork"},"translated_abstract":null,"internal_url":"https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections","translated_internal_url":"","created_at":"2015-06-03T18:58:21.633-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":37820024,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/37820024/thumbnails/1.jpg","file_name":"ArtVis.pdf","download_url":"https://www.academia.edu/attachments/37820024/download_file","bulk_download_file_name":"ArtVis_Gaining_New_Insights_from_Digital.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/37820024/ArtVis-libre.pdf?1433383253=\u0026response-content-disposition=attachment%3B+filename%3DArtVis_Gaining_New_Insights_from_Digital.pdf\u0026Expires=1744203890\u0026Signature=dgy8zBOYc31b~rFzfNRy~oyzzRE72DIWeoamyUTC~klHl-2jP-Efjat9gVqEjvP0dCa1l5BYz-E7RIEa5oR42enQgyXYm6QIuMuG9ImedzkL2RvIyUfPBkmMv0NNGXsZNv6rAGEeG0ZGLv-h0psRe-BVojQevfU22QQmFj28SLFMbAiBFXG-72IVBR5NqsCvzJLyBEhMah7cY8RqUSd388OuOHsJ1bU5F4myQsNOdb2uwxjdr9fDX8gTPmlNOi-kN7nkFRQdZ4nhbbqBXltxI~pwY-XQj0H8F4IlK7~qdgTvqz3dGhIf-7ALwptbHZUjCN5lLohIbua8OZfKDgC1eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections","translated_slug":"","page_count":1,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":37820024,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/37820024/thumbnails/1.jpg","file_name":"ArtVis.pdf","download_url":"https://www.academia.edu/attachments/37820024/download_file","bulk_download_file_name":"ArtVis_Gaining_New_Insights_from_Digital.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/37820024/ArtVis-libre.pdf?1433383253=\u0026response-content-disposition=attachment%3B+filename%3DArtVis_Gaining_New_Insights_from_Digital.pdf\u0026Expires=1744203890\u0026Signature=dgy8zBOYc31b~rFzfNRy~oyzzRE72DIWeoamyUTC~klHl-2jP-Efjat9gVqEjvP0dCa1l5BYz-E7RIEa5oR42enQgyXYm6QIuMuG9ImedzkL2RvIyUfPBkmMv0NNGXsZNv6rAGEeG0ZGLv-h0psRe-BVojQevfU22QQmFj28SLFMbAiBFXG-72IVBR5NqsCvzJLyBEhMah7cY8RqUSd388OuOHsJ1bU5F4myQsNOdb2uwxjdr9fDX8gTPmlNOi-kN7nkFRQdZ4nhbbqBXltxI~pwY-XQj0H8F4IlK7~qdgTvqz3dGhIf-7ALwptbHZUjCN5lLohIbua8OZfKDgC1eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information 
Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1236,"name":"Art","url":"https://www.academia.edu/Documents/in/Art"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information 
Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":18711,"name":"Technology-mediated teaching and learning","url":"https://www.academia.edu/Documents/in/Technology-mediated_teaching_and_learning"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":27360,"name":"Databases","url":"https://www.academia.edu/Documents/in/Databases"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":66003,"name":"Human-Machine Interaction","url":"https://www.academia.edu/Documents/in/Human-Machine_Interaction"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":1226904,"name":"Computer Science and Engineering","url":"https://www.academia.edu/Documents/in/Computer_Science_and_Engineering-1"}],"urls":[{"id":4839678,"url":"http://www.beatsigner.com/flyers/ArtVis.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-12785289-figures'); } }); </script> <div class="profile--tab_heading_container js-section-heading" data-section="Papers" id="Papers"><h3 class="profile--tab_heading_container">Papers by Beat Signer</h3></div><div class="js-work-strip profile--work_container" data-work-id="40139780"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures"><img alt="Research paper thumbnail of Towards Cross-Media Information Spaces and Architectures" class="work-thumbnail" src="https://attachments.academia-assets.com/77413706/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures">Towards Cross-Media Information Spaces and Architectures</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The efficient management and retrieval of information via dedicated devices and data structures h...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush&#39;s seminal article As We May Think introducing the Memex. 
However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ad0690a5d0beba195f5e5f3c3528a83e" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77413706,&quot;asset_id&quot;:40139780,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77413706/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="40139780"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="40139780"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 40139780; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=40139780]").text(description); $(".js-view-count[data-work-id=40139780]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 40139780; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='40139780']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 
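A key point of the abstract is that the same link abstraction spans digital and physical information environments. The self-contained snippet below illustrates just that bridging aspect; the interfaces and all identifiers are made up for illustration and do not reflect the paper's actual implementation.

```typescript
// A minimal, self-contained illustration (assumed names, not the actual
// RSL API) of a link that bridges a physical and a digital resource.
interface Resource { id: string; description: string }
interface Link { sources: Resource[]; targets: Resource[] }

// A physical artefact, identified here by a made-up URN scheme.
const printedPage: Resource = { id: 'urn:paper:handout-p3', description: 'Page 3 of a printed handout' };
// A digital artefact, identified by a placeholder URL.
const slideDeck: Resource = { id: 'https://example.org/slides/talk.html', description: 'Online slide deck' };

// The link itself is media-agnostic: it does not care whether its
// endpoints live in a digital or a physical information environment.
const crossMediaLink: Link = { sources: [printedPage], targets: [slideDeck] };
console.log(crossMediaLink);
```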
90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ad0690a5d0beba195f5e5f3c3528a83e" } } $('.js-work-strip[data-work-id=40139780]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":40139780,"title":"Towards Cross-Media Information Spaces and Architectures","translated_title":"","metadata":{"doi":"10.1109/RCIS.2019.8877105","abstract":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","more_info":"Beat Signer, Proceedings of RCIS 2019, 13th International Conference on Research Challenges in Information Science, Brussels, Belgium, May 2019","publication_date":{"day":null,"month":null,"year":2019,"errors":{}}},"translated_abstract":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. 
First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","internal_url":"https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures","translated_internal_url":"","created_at":"2019-08-21T09:45:00.925-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":77413706,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77413706/thumbnails/1.jpg","file_name":"towards_cross_media_information_spaces_and_architectures.pdf","download_url":"https://www.academia.edu/attachments/77413706/download_file","bulk_download_file_name":"Towards_Cross_Media_Information_Spaces_a.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77413706/towards_cross_media_information_spaces_and_architectures-libre.pdf?1640609738=\u0026response-content-disposition=attachment%3B+filename%3DTowards_Cross_Media_Information_Spaces_a.pdf\u0026Expires=1744203890\u0026Signature=QjfZjpxK53XI~slQCb9lXlrn4prZruc~Y8OLUpOFQS0FHWeEwRnOHLwSWeqXaHCloVH3Xn44UjXLgbDxdN64BZQV35DsAXcWUR6SQI-fvq4HdRiysZ1OoK57-dxFaYMQU5sH~mCdm3P8W2JUrAUcIPrhYU8Khv~ZhtQFU~PftjOhTiMDuJQDJGeGWinRPN313B8NUVp7biqXsJBPB-3LY9h93afI3Gopt-COoGcH1AAWzq2E3obzJe8Kd~ie4pIQwgPnun9kB93Mcxb~rnmSjBjzKfyIOdgUbKAVRKhgbYzlj2qjs2cZKu9CjTAHYbTLwJep8GV~~-s70mz7I6Q2Ug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Towards_Cross_Media_Information_Spaces_and_Architectures","translated_slug":"","page_count":7,"language":"en","content_type":"Work","summary":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. 
We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77413706,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77413706/thumbnails/1.jpg","file_name":"towards_cross_media_information_spaces_and_architectures.pdf","download_url":"https://www.academia.edu/attachments/77413706/download_file","bulk_download_file_name":"Towards_Cross_Media_Information_Spaces_a.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77413706/towards_cross_media_information_spaces_and_architectures-libre.pdf?1640609738=\u0026response-content-disposition=attachment%3B+filename%3DTowards_Cross_Media_Information_Spaces_a.pdf\u0026Expires=1744203890\u0026Signature=QjfZjpxK53XI~slQCb9lXlrn4prZruc~Y8OLUpOFQS0FHWeEwRnOHLwSWeqXaHCloVH3Xn44UjXLgbDxdN64BZQV35DsAXcWUR6SQI-fvq4HdRiysZ1OoK57-dxFaYMQU5sH~mCdm3P8W2JUrAUcIPrhYU8Khv~ZhtQFU~PftjOhTiMDuJQDJGeGWinRPN313B8NUVp7biqXsJBPB-3LY9h93afI3Gopt-COoGcH1AAWzq2E3obzJe8Kd~ie4pIQwgPnun9kB93Mcxb~rnmSjBjzKfyIOdgUbKAVRKhgbYzlj2qjs2cZKu9CjTAHYbTLwJep8GV~~-s70mz7I6Q2Ug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":5266,"name":"Human Information 
Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":9196123,"url":"https://beatsigner.com/publications/towards-cross-media-information-spaces-and-architectures.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-40139780-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="120601781"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/120601781/Pen_based_Interaction"><img alt="Research paper thumbnail of Pen-based Interaction" class="work-thumbnail" src="https://attachments.academia-assets.com/120841760/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/120601781/Pen_based_Interaction">Pen-based Interaction</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The use of pens in human-computer interaction has been investigated since Ivan Sutherland&#39;s Sketc...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The use of pens in human-computer interaction has been investigated since Ivan Sutherland&#39;s Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-120601781-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-120601781-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/51867935/figure-1-early-pen-based-devices-and-technologies"><img alt="Fig. 1 Early pen-based devices and technologies " class="figure-slide-image" src="https://figures.academia-assets.com/120841760/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/51867955/figure-2-modern-pen-based-devices-and-technologies"><img alt="Fig. 2 Modern pen-based devices and technologies " class="figure-slide-image" src="https://figures.academia-assets.com/120841760/figure_002.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-120601781-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="84a21d71cafca7d213e9d6c8475d466c" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:120841760,&quot;asset_id&quot;:120601781,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/120841760/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="120601781"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="120601781"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 120601781; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=120601781]").text(description); $(".js-view-count[data-work-id=120601781]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 120601781; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='120601781']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); 
container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "84a21d71cafca7d213e9d6c8475d466c" } } $('.js-work-strip[data-work-id=120601781]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":120601781,"title":"Pen-based Interaction","translated_title":"","metadata":{"doi":"10.1007/978-3-319-27648-9_102-1","abstract":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","more_info":"Beat Signer, Handbook of Human Computer Interaction, Major Reference Work, Springer Nature, 2025","publication_date":{"day":null,"month":null,"year":2025,"errors":{}}},"translated_abstract":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","internal_url":"https://www.academia.edu/120601781/Pen_based_Interaction","translated_internal_url":"","created_at":"2024-06-05T17:06:12.712-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":120841760,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120841760/thumbnails/1.jpg","file_name":"signer_HCIhandbook2025.pdf","download_url":"https://www.academia.edu/attachments/120841760/download_file","bulk_download_file_name":"Pen_based_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120841760/signer_HCIhandbook2025-libre.pdf?1736963196=\u0026response-content-disposition=attachment%3B+filename%3DPen_based_Interaction.pdf\u0026Expires=1744203890\u0026Signature=X9RgsdYhBbZAipbbOyQH~gZhPyD6ZdK0CrdJ-kXG2b-aSpDS6pTrB3sGAmAcJLAXX-c-u2trydCc9ZyejF-LRRZnGW5~rtTAPs6UpWdgqvmewJnX1nWpAW1LGOe~guI8W9CH3PaE9mNxf5nG2DvgNdnxMmlhE2T4IV6pb6h7f1Mw9NtGuM0aPG8IgIhcMrHhIv62kP2kstXvNZgnCYLHvkaMibVA34qKzke2wVIAzwqCix1zgZnw5mewPlD-6cdhtrO7PtBW4L1ug~ZN9xPm0C0nnRqsA-2-P33QkZWNUHybU4vbx4Em1p0ItQVNqsUevc8OFUGxTsF-DTCrFEx~Yg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Pen_based_Interaction","translated_slug":"","page_count":24,"language":"en","content_type":"Work","summary":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":120841760,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120841760/thumbnails/1.jpg","file_name":"signer_HCIhandbook2025.pdf","download_url":"https://www.academia.edu/attachments/120841760/download_file","bulk_download_file_name":"Pen_based_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120841760/signer_HCIhandbook2025-libre.pdf?1736963196=\u0026response-content-disposition=attachment%3B+filename%3DPen_based_Interaction.pdf\u0026Expires=1744203890\u0026Signature=X9RgsdYhBbZAipbbOyQH~gZhPyD6ZdK0CrdJ-kXG2b-aSpDS6pTrB3sGAmAcJLAXX-c-u2trydCc9ZyejF-LRRZnGW5~rtTAPs6UpWdgqvmewJnX1nWpAW1LGOe~guI8W9CH3PaE9mNxf5nG2DvgNdnxMmlhE2T4IV6pb6h7f1Mw9NtGuM0aPG8IgIhcMrHhIv62kP2kstXvNZgnCYLHvkaMibVA34qKzke2wVIAzwqCix1zgZnw5mewPlD-6cdhtrO7PtBW4L1ug~ZN9xPm0C0nnRqsA-2-P33QkZWNUHybU4vbx4Em1p0ItQVNqsUevc8OFUGxTsF-DTCrFEx~Yg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":14203,"name":"Natural User Interfaces","url":"https://www.academia.edu/Documents/in/Natural_User_Interfaces"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":126585,"name":"Human Computer Interaction Design","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction_Design"},{"id":469749,"name":"Human Computer Interface","url":"https://www.academia.edu/Documents/in/Human_Computer_Interface"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1226904,"name":"Computer Science and 
Engineering","url":"https://www.academia.edu/Documents/in/Computer_Science_and_Engineering-1"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":42685186,"url":"https://beatsigner.com/publications/pen-based-interaction.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-120601781-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="241739"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/241739/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse"><img alt="Research paper thumbnail of What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse" class="work-thumbnail" src="https://attachments.academia-assets.com/59300463/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/241739/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse">What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Many of today&#39;s digital document formats are strongly based on a digital emulation of printed med...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Many of today&#39;s digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, does not pay attention to some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. 
We further discuss solutions for the fluid navigation and cross-media content publishing based on the proposed structural cross-media content model.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ee3429dda2158feb0d7531a0fc15735d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:59300463,&quot;asset_id&quot;:241739,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/59300463/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="241739"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="241739"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 241739; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=241739]").text(description); $(".js-view-count[data-work-id=241739]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 241739; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='241739']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ee3429dda2158feb0d7531a0fc15735d" } } $('.js-work-strip[data-work-id=241739]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":241739,"title":"What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse","translated_title":"","metadata":{"doi":"10.1007/978-3-642-16373-9_28","abstract":"Many of today's digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. 
The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, does not pay attention to some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. We further discuss solutions for the fluid navigation and cross-media content publishing based on the proposed structural cross-media content model.","more_info":"Beat Signer, Proceedings of ER 2010, 29th International Conference on Conceptual Modeling, Vancouver, Canada, November 2010","publication_date":{"day":null,"month":null,"year":2010,"errors":{}}},"translated_abstract":"Many of today's digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, does not pay attention to some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. 
We further discuss solutions for the fluid navigation and cross-media content publishing based on the proposed structural cross-media content model.","internal_url":"https://www.academia.edu/241739/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse","translated_internal_url":"","created_at":"2010-06-02T18:51:41.095-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":59300463,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59300463/thumbnails/1.jpg","file_name":"signer_ER2010.pdf","download_url":"https://www.academia.edu/attachments/59300463/download_file","bulk_download_file_name":"What_is_Wrong_with_Digital_Documents_A_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59300463/signer_ER2010-libre.pdf?1558175311=\u0026response-content-disposition=attachment%3B+filename%3DWhat_is_Wrong_with_Digital_Documents_A_C.pdf\u0026Expires=1744203890\u0026Signature=Xit-FFWX-3MErrm0MXkyG3zNyEAp0IcpdP21jgaKoDzVIKCgDHlXMz4PlRi7GykrdNi13ho1vH9wO-89~PbKUwnAbHmJ9uodK9UE79wGIDgXtU4oBDRQ7pbsB7StvujgbDh8tRsS2WZjU~F-dOwm2VsylSab9~DEtJy6Bzv1memLP299Cr0uTYZtoFKv2bHP0DN9L~e3FAzeiAV21gfxRq64CGAY3OAQIy06avFHArSwJJfz-Cj4Su2RFtRmrH2jL~EnCi3U2YI~rwZtJ7pSPAi-lF6VLifNeJqL4ugaeMhiMfEvLUMBjLknN0TEfuNFQS-0bTZJJLIT2LKY6wPdYA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse","translated_slug":"","page_count":14,"language":"en","content_type":"Work","summary":"Many of today's digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, does not pay attention to some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. 
We further discuss solutions for the fluid navigation and cross-media content publishing based on the proposed structural cross-media content model.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":59300463,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59300463/thumbnails/1.jpg","file_name":"signer_ER2010.pdf","download_url":"https://www.academia.edu/attachments/59300463/download_file","bulk_download_file_name":"What_is_Wrong_with_Digital_Documents_A_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59300463/signer_ER2010-libre.pdf?1558175311=\u0026response-content-disposition=attachment%3B+filename%3DWhat_is_Wrong_with_Digital_Documents_A_C.pdf\u0026Expires=1744203890\u0026Signature=Xit-FFWX-3MErrm0MXkyG3zNyEAp0IcpdP21jgaKoDzVIKCgDHlXMz4PlRi7GykrdNi13ho1vH9wO-89~PbKUwnAbHmJ9uodK9UE79wGIDgXtU4oBDRQ7pbsB7StvujgbDh8tRsS2WZjU~F-dOwm2VsylSab9~DEtJy6Bzv1memLP299Cr0uTYZtoFKv2bHP0DN9L~e3FAzeiAV21gfxRq64CGAY3OAQIy06avFHArSwJJfz-Cj4Su2RFtRmrH2jL~EnCi3U2YI~rwZtJ7pSPAi-lF6VLifNeJqL4ugaeMhiMfEvLUMBjLknN0TEfuNFQS-0bTZJJLIT2LKY6wPdYA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1384,"name":"Web Engineering","url":"https://www.academia.edu/Documents/in/Web_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4568,"name":"Liberalism","url":"https://www.academia.edu/Documents/in/Liberalism"},{"id":4682,"name":"Reading Habits/Attitudes","url":"https://www.academia.edu/Documents/in/Reading_Habits_Attitudes"},{"id":7454,"name":"Information Communication 
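To make the contrast between copy-and-paste replication and composition via structural references concrete, here is a small hypothetical TypeScript sketch. The class names and the simple context model are illustrative assumptions and not the paper's actual conceptual model.

// Content is stored exactly once and embedded by reference, so editing
// a shared element updates every document that includes it.
type Context = { device: "desktop" | "mobile" };

interface Element {
  resolve(ctx: Context): string;
}

class Content implements Element {
  constructor(private body: string) {}
  resolve(_ctx: Context): string { return this.body; }
}

// Structural reference: embeds another element instead of copying it.
class Include implements Element {
  constructor(private target: Element) {}
  resolve(ctx: Context): string { return this.target.resolve(ctx); }
}

// Context-dependent variant: supports dynamic document adaptation.
class Variant implements Element {
  constructor(private variants: Record<string, Element>) {}
  resolve(ctx: Context): string {
    return this.variants[ctx.device]?.resolve(ctx) ?? "";
  }
}

const sharedIntro = new Content("Shared introduction paragraph.");
const report: Element[] = [
  new Include(sharedIntro),  // reused via a structural reference, not copied
  new Variant({
    desktop: new Content("Full-resolution figure."),
    mobile: new Content("Figure thumbnail."),
  }),
];
console.log(report.map(e => e.resolve({ device: "mobile" })).join("\n"));

The design choice the sketch highlights is that a document becomes a graph of references over shared content rather than a monolithic block, which is what enables the reuse, adaptation and annotation features listed in the abstract.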
Technology","url":"https://www.academia.edu/Documents/in/Information_Communication_Technology"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":9471,"name":"Reading","url":"https://www.academia.edu/Documents/in/Reading"},{"id":10048,"name":"Future Media","url":"https://www.academia.edu/Documents/in/Future_Media"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":10249,"name":"Writing","url":"https://www.academia.edu/Documents/in/Writing"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":29124,"name":"Web Science","url":"https://www.academia.edu/Documents/in/Web_Science"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":193390,"name":"RSL","url":"https://www.academia.edu/Documents/in/RSL"},{"id":369022,"name":"Cross Media","url":"https://www.academia.edu/Documents/in/Cross_Media"},{"id":402880,"name":"Personal Information Management - PIM","url":"https://www.academia.edu/Documents/in/Personal_Information_Management_-_PIM"},{"id":418691,"name":"Multimedia Computing","url":"https://www.academia.edu/Documents/in/Multimedia_Computing"},{"id":668047,"name":"Digital Media and Interaction Design","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Interaction_Design"}],"urls":[{"id":9196129,"url":"https://beatsigner.com/publications/what-is-wrong-with-digital-documents-a-conceptual-model-for-structural-cross-media-content-composition-and-reuse.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-241739-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="37336859"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/37336859/Towards_a_Framework_for_Dynamic_Data_Physicalisation"><img alt="Research paper thumbnail of Towards a Framework for Dynamic Data Physicalisation" class="work-thumbnail" src="https://attachments.academia-assets.com/59746352/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/37336859/Towards_a_Framework_for_Dynamic_Data_Physicalisation">Towards a Framework for Dynamic Data Physicalisation</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a>, <a class="" data-click-track="profile-work-strip-authors" 
href="https://vub.academia.edu/PayamEbrahimi">Payam Ebrahimi</a>, and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/AhmedKareemAAbdullah">Ahmed K.A. Abdullah</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Rec...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user&#39;s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="004fb5eaa4f6acfa8e3b4f76cd783a1d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:59746352,&quot;asset_id&quot;:37336859,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/59746352/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="37336859"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="37336859"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 37336859; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=37336859]").text(description); $(".js-view-count[data-work-id=37336859]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 37336859; 
window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='37336859']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "004fb5eaa4f6acfa8e3b4f76cd783a1d" } } $('.js-work-strip[data-work-id=37336859]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":37336859,"title":"Towards a Framework for Dynamic Data Physicalisation","translated_title":"","metadata":{"abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","more_info":" Beat Signer, Payam Ebrahimi, Timothy J. Curtin and Ahmed K.A. Abdullah, International Workshop Toward a Design Language for Data Physicalization, Berlin, Germany, October 2018","publication_date":{"day":null,"month":null,"year":2018,"errors":{}}},"translated_abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. 
Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","internal_url":"https://www.academia.edu/37336859/Towards_a_Framework_for_Dynamic_Data_Physicalisation","translated_internal_url":"","created_at":"2018-09-03T13:04:25.315-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":31847683,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":3163292,"co_author_invite_id":null,"email":"p***m@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Payam Ebrahimi","title":"Towards a Framework for Dynamic Data Physicalisation"},{"id":31847684,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":62307900,"co_author_invite_id":null,"email":"t***n@vub.ac.be","display_order":2,"name":"Timothy Curtin","title":"Towards a Framework for Dynamic Data Physicalisation"},{"id":31871736,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":91228720,"co_author_invite_id":null,"email":"a***h@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":3,"name":"Ahmed K.A. Abdullah","title":"Towards a Framework for Dynamic Data Physicalisation"}],"downloadable_attachments":[{"id":59746352,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59746352/thumbnails/1.jpg","file_name":"signer_DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/59746352/download_file","bulk_download_file_name":"Towards_a_Framework_for_Dynamic_Data_Phy.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59746352/signer_DataPhys2018-libre.pdf?1560637089=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Framework_for_Dynamic_Data_Phy.pdf\u0026Expires=1744203890\u0026Signature=b12zZvVZvmqYV1PGe9xhwnT70V2Jxe1KCZWGCdIiQaEcwUrv7-LHMWXtR8O8Vagz3vZYVsJl1nnXYRrJsBCvdgkqXhMOEH8oOYTq1YgLSGZGI-gjpAqUJxHwuU1WY-lbXmsHHwZ-AjMe4WMAMQeHkpGpEC3LXzKnM-1ml8eJW79mPozzThZeGUDvjF8HVbhy8GoG4YBpQYfZil6aDxihzzpFKuiKSodAhI2Hm6S6yNSycPNQgV8xnCDLpZh2fGXx7i7ed12dUahy-oKTA~avIU6ekGIqYdt8iWb8D4a-aGXamBMTIDPngL0XeTBKs~9fljdj-EBZ8pq~IyRMVkdA3A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Towards_a_Framework_for_Dynamic_Data_Physicalisation","translated_slug":"","page_count":4,"language":"en","content_type":"Work","summary":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. 
Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":59746352,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59746352/thumbnails/1.jpg","file_name":"signer_DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/59746352/download_file","bulk_download_file_name":"Towards_a_Framework_for_Dynamic_Data_Phy.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59746352/signer_DataPhys2018-libre.pdf?1560637089=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Framework_for_Dynamic_Data_Phy.pdf\u0026Expires=1744203890\u0026Signature=b12zZvVZvmqYV1PGe9xhwnT70V2Jxe1KCZWGCdIiQaEcwUrv7-LHMWXtR8O8Vagz3vZYVsJl1nnXYRrJsBCvdgkqXhMOEH8oOYTq1YgLSGZGI-gjpAqUJxHwuU1WY-lbXmsHHwZ-AjMe4WMAMQeHkpGpEC3LXzKnM-1ml8eJW79mPozzThZeGUDvjF8HVbhy8GoG4YBpQYfZil6aDxihzzpFKuiKSodAhI2Hm6S6yNSycPNQgV8xnCDLpZh2fGXx7i7ed12dUahy-oKTA~avIU6ekGIqYdt8iWb8D4a-aGXamBMTIDPngL0XeTBKs~9fljdj-EBZ8pq~IyRMVkdA3A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":4205,"name":"Data Analysis","url":"https://www.academia.edu/Documents/in/Data_Analysis"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":20470,"name":"Tangible User 
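The abstract describes a grammar of fundamental physical operations applied to the underlying data. The TypeScript sketch below shows, purely speculatively, what such a grammar might look like; the operation names, physical variables and the pin-display device are assumptions for illustration, not the grammar proposed in the paper.

// Hypothetical grammar: select a subset of the data and map a data
// field onto a physical variable of an actuated output device.
type Row = Record<string, number>;
type PhysicalVariable = "height" | "orientation" | "vibration";

type Operation =
  | { op: "filter"; predicate: (row: Row) => boolean }
  | { op: "encode"; variable: PhysicalVariable; field: string };

interface OutputDevice {
  actuate(unit: number, variable: PhysicalVariable, value: number): void;
}

// An assumed shape-changing pin display: one actuated pin per data row.
class PinDisplay implements OutputDevice {
  actuate(unit: number, variable: PhysicalVariable, value: number): void {
    console.log(`pin ${unit}: set ${variable} to ${value}`);
  }
}

// Interpret a sequence of operations and drive the output device,
// so the physicalisation can be re-run whenever the data changes.
function physicalise(ops: Operation[], data: Row[], device: OutputDevice): void {
  let view = data;
  let encoding: { variable: PhysicalVariable; field: string } | undefined;
  for (const o of ops) {
    if (o.op === "filter") view = view.filter(o.predicate);
    else encoding = { variable: o.variable, field: o.field };
  }
  if (encoding) {
    const e = encoding;
    view.forEach((row, i) => device.actuate(i, e.variable, row[e.field]));
  }
}

physicalise(
  [
    { op: "filter", predicate: r => r.year >= 2015 },
    { op: "encode", variable: "height", field: "sales" },
  ],
  [{ year: 2014, sales: 3 }, { year: 2016, sales: 7 }, { year: 2018, sales: 5 }],
  new PinDisplay(),
);

Separating the declarative operations from the device that executes them mirrors the extensibility goal in the abstract: new physical variables or output devices can be swapped in without changing how a physicalisation is specified.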
Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":126300,"name":"Big Data","url":"https://www.academia.edu/Documents/in/Big_Data"},{"id":140531,"name":"Framework","url":"https://www.academia.edu/Documents/in/Framework"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"}],"urls":[{"id":9196125,"url":"https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-37336859-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="122578719"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/122578719/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction"><img alt="Research paper thumbnail of As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction" class="work-thumbnail" src="https://attachments.academia-assets.com/117388797/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/122578719/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction">As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. 
While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. We outline the challenges and opportunities for next-generation multimodal human-information interaction enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-122578719-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-122578719-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/28930603/figure-1-next-generation-human-information-interaction-image"><img alt="Figure 1: Next-generation human-information interaction (image created with the assistance of DALL-E 3) " class="figure-slide-image" src="https://figures.academia-assets.com/117388797/figure_001.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-122578719-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f42d8dd417c294d86168cd81aa0a0cef" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:117388797,&quot;asset_id&quot;:122578719,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/117388797/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="122578719"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="122578719"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 122578719; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=122578719]").text(description); $(".js-view-count[data-work-id=122578719]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 122578719; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='122578719']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + 
percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f42d8dd417c294d86168cd81aa0a0cef" } } $('.js-work-strip[data-work-id=122578719]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":122578719,"title":"As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction","translated_title":"","metadata":{"doi":"10.1145/3679058.3688629","abstract":"Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. We outline the challenges and opportunities for next-generation multimodal human-information interaction enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.","more_info":"Beat Signer, Proceedings of HUMAN 2024, 7th Workshop on Human Factors in Hypertext, Poznan, Poland, September 2024","ai_title_tag":"Future Human-Information Interaction: Challenges \u0026 Opportunities","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. 
Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. We outline the challenges and opportunities for next-generation multimodal human-information interaction enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.","internal_url":"https://www.academia.edu/122578719/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction","translated_internal_url":"","created_at":"2024-08-04T15:34:39.942-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":117388797,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/117388797/thumbnails/1.jpg","file_name":"signer_HUMAN2024.pdf","download_url":"https://www.academia.edu/attachments/117388797/download_file","bulk_download_file_name":"As_We_May_Interact_Challenges_and_Opport.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/117388797/signer_HUMAN2024-libre.pdf?1723470508=\u0026response-content-disposition=attachment%3B+filename%3DAs_We_May_Interact_Challenges_and_Opport.pdf\u0026Expires=1744203890\u0026Signature=ZvKPFbsOIV5ESwBAp0CE7OPyE-U1xp8Ne6OK70bqUo4c8xFyD3oPoeOI~rYaoG0~OG3F~asxmV4NVMrUOv2YFrqV5htoMvz4NHee2T19KGdh7fOPmvgcG4dxQM5GoFZNi5GGhmv~5pOM46XbpnB0qd-UlRXPyi~jgA67yWjvOlgOvsYLcc50z8cDoMVrvCOmImR3p~DxsWf2xhwqJvVzspIMRgNrq7GkdG~xujExFlzlQoEsnSN76VsoApATWS7h4fZpTSa9aZkB94H302YxOVjb8XVXl8gcChZgvUgSvcf5o-alo7tNggbeQjd0pNLfPtsWQsohxK5FXaZnOcy-pA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction","translated_slug":"","page_count":2,"language":"en","content_type":"Work","summary":"Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. 
While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. We outline the challenges and opportunities for next-generation multimodal human-information interaction enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":117388797,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/117388797/thumbnails/1.jpg","file_name":"signer_HUMAN2024.pdf","download_url":"https://www.academia.edu/attachments/117388797/download_file","bulk_download_file_name":"As_We_May_Interact_Challenges_and_Opport.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/117388797/signer_HUMAN2024-libre.pdf?1723470508=\u0026response-content-disposition=attachment%3B+filename%3DAs_We_May_Interact_Challenges_and_Opport.pdf\u0026Expires=1744203890\u0026Signature=ZvKPFbsOIV5ESwBAp0CE7OPyE-U1xp8Ne6OK70bqUo4c8xFyD3oPoeOI~rYaoG0~OG3F~asxmV4NVMrUOv2YFrqV5htoMvz4NHee2T19KGdh7fOPmvgcG4dxQM5GoFZNi5GGhmv~5pOM46XbpnB0qd-UlRXPyi~jgA67yWjvOlgOvsYLcc50z8cDoMVrvCOmImR3p~DxsWf2xhwqJvVzspIMRgNrq7GkdG~xujExFlzlQoEsnSN76VsoApATWS7h4fZpTSa9aZkB94H302YxOVjb8XVXl8gcChZgvUgSvcf5o-alo7tNggbeQjd0pNLfPtsWQsohxK5FXaZnOcy-pA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":39370,"name":"Mixed Reality","url":"https://www.academia.edu/Documents/in/Mixed_Reality"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"}],"urls":[{"id":43816080,"url":"https://beatsigner.com/publications/as-we-may-interact-challenges-and-opportunities-for-next-generation-human-information-interaction.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-122578719-figures'); } }); </script> <div class="js-work-strip 
Interactive Paper: Past, Present and Future

Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.

Beat Signer and Moira C. Norrie, Proceedings of PaperComp 2010, 1st International Workshop on Paper Computing, Copenhagen, Denmark, September 2010.
PDF: https://beatsigner.com/publications/interactive-paper-past-present-and-future.pdf
Tangible Holograms: Towards Mobile Physical Augmentation of Virtual Objects

The last two decades have seen the emergence and steady development of tangible user interfaces. While most of these interfaces are applied for input, with output still on traditional computer screens, the goal of programmable matter and actuated shape-changing materials is to directly use the physical objects for visual or tangible feedback. Advances in material sciences and flexible display technologies are investigated to enable such reconfigurable physical objects. While existing solutions aim at making physical objects more controllable via the digital world, we propose an approach where holograms (virtual objects) in a mixed reality environment are augmented with physical variables such as shape, texture or temperature. As such, the support for mobility forms an important contribution of the proposed solution since it enables users to freely move within and across environments. Furthermore, our augmented virtual objects can co-exist in a single environment with programmable matter and other actuated shape-changing solutions. The future potential of the proposed approach is illustrated in two usage scenarios and we hope that the presentation of our work in progress on a novel way to realise tangible holograms will foster some lively discussions in the CHI community.

Beat Signer and Timothy Curtin, Technical Report WISE-2017-01, WISE Lab, March 2017.
PDF: https://beatsigner.com/publications/tangible-holograms-towards-mobile-physical-augmentation-of-virtual-objects.pdf
text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/50140273/Back_to_the_Future_Bringing_Original_Hypermedia_and_Cross_Media_Concepts_to_Modern_Desktop_Environments">Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a>, <a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/RobertvanBarlingen">Robert van Barlingen</a>, and <a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/ReinoutRoels">Reinout Roels</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Over the last few decades, we have seen massive improvements in computing power, but nevertheless...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user&#39;s data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents, and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolving. 
We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="e212459d23a8d6c746e89189b6108b0d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:68237788,&quot;asset_id&quot;:50140273,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/68237788/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="50140273"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="50140273"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 50140273; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=50140273]").text(description); $(".js-view-count[data-work-id=50140273]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 50140273; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='50140273']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "e212459d23a8d6c746e89189b6108b0d" } } $('.js-work-strip[data-work-id=50140273]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":50140273,"title":"Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments","translated_title":"","metadata":{"doi":"10.1145/3465336.3475122","abstract":"Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. 
This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user's data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolution. We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.

Beat Signer, Reinout Roels, Robert Van Barlingen and Brent Willems, Proceedings of Hypertext 2021, 32nd ACM Conference on Hypertext and Social Media, Virtual Event, August 2021
PDF: https://beatsigner.com/publications/back-to-the-future-bringing-original-hypermedia-and-cross-media-concepts-to-modern-desktop-environments.pdf
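To make the resource-selector-link idea more concrete, the following TypeScript sketch models content fragments, selectors over them and links that compose fragments into documents. All names are hypothetical illustrations of the concepts in the abstract and do not follow the published RSL metamodel.

```typescript
// Minimal sketch of the resource-selector-link (RSL) idea from the abstract;
// all names are hypothetical and do not follow the published metamodel.

// A resource is any piece of media (text snippet, image, ...).
interface Resource {
  id: string;
  mediaType: string; // e.g. 'text/plain' or 'image/png'
}

// A selector addresses a part of a resource (e.g. a character range).
interface Selector {
  resource: Resource;
  fragment: { start: number; end: number };
}

// A link connects sources to targets, where each end point may be a whole
// resource, a selector or another link.
type Entity = Resource | Selector | Link;

interface Link {
  id: string;
  sources: Entity[];
  targets: Entity[];
}

// A "document" is then just a structural link over content fragments, so the
// same fragment can be reused in several documents or shared across machines.
const paragraph: Resource = { id: 'frag-1', mediaType: 'text/plain' };
const figure: Resource = { id: 'frag-2', mediaType: 'image/png' };

const report: Link = {
  id: 'doc-report',
  sources: [],
  targets: [paragraph, figure], // structural composition instead of a monolithic file
};
```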
systems","url":"https://www.academia.edu/Documents/in/File_management_for_operating_systems"},{"id":640377,"name":"Document Management","url":"https://www.academia.edu/Documents/in/Document_Management"},{"id":641270,"name":"Computer Information Systems","url":"https://www.academia.edu/Documents/in/Computer_Information_Systems"},{"id":974547,"name":"New Trends \u0026 Technologies of Information Science","url":"https://www.academia.edu/Documents/in/New_Trends_and_Technologies_of_Information_Science"},{"id":1258169,"name":"File Manager","url":"https://www.academia.edu/Documents/in/File_Manager"},{"id":1315262,"name":"Douglas Engelbart","url":"https://www.academia.edu/Documents/in/Douglas_Engelbart"},{"id":3745667,"name":"File management","url":"https://www.academia.edu/Documents/in/File_management"}],"urls":[{"id":10515097,"url":"https://beatsigner.com/publications/back-to-the-future-bringing-original-hypermedia-and-cross-media-concepts-to-modern-desktop-environments.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-50140273-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="44902025"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System"><img alt="Research paper thumbnail of OpenHPS: An Open Source Hybrid Positioning System" class="work-thumbnail" src="https://attachments.academia-assets.com/65488861/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System">OpenHPS: An Open Source Hybrid Positioning System</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Positioning systems and frameworks use various techniques to determine the position of an object....</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b5a637de390718fbeeaa78c96b61f2b6" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:65488861,&quot;asset_id&quot;:44902025,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/65488861/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="44902025"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="44902025"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 44902025; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=44902025]").text(description); $(".js-view-count[data-work-id=44902025]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 44902025; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='44902025']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b5a637de390718fbeeaa78c96b61f2b6" } } $('.js-work-strip[data-work-id=44902025]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":44902025,"title":"OpenHPS: An Open Source Hybrid Positioning System","translated_title":"","metadata":{"abstract":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. 
We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","more_info":"Technical Report WISE Lab, WISE-2020-01, December 2020","ai_title_tag":"OpenHPS: A Hybrid Open Source Positioning System","publication_date":{"day":null,"month":null,"year":2020,"errors":{}}},"translated_abstract":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","internal_url":"https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System","translated_internal_url":"","created_at":"2021-01-14T02:23:15.818-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":38902015,"work_id":44902025,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":-1,"name":"Maxim Van de Wynckel","title":"OpenHPS: An Open Source Hybrid Positioning System"}],"downloadable_attachments":[{"id":65488861,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/65488861/thumbnails/1.jpg","file_name":"OpenHPS.pdf","download_url":"https://www.academia.edu/attachments/65488861/download_file","bulk_download_file_name":"OpenHPS_An_Open_Source_Hybrid_Positionin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/65488861/OpenHPS-libre.pdf?1611352920=\u0026response-content-disposition=attachment%3B+filename%3DOpenHPS_An_Open_Source_Hybrid_Positionin.pdf\u0026Expires=1744203890\u0026Signature=G0AoN5OFeYvEQAE7EHjJX72tnmEWTrt7skE9GdN2FhwMZ1jaD1G7Qraa7Rq1OKv3zwiOLsJgIu8hfkicknarFwTSWYk-ar0Wk3qfmJE6rV36kZH1CaZ3K31ZK0gsVrBfX4-tzfsX1A3yKVXeihF~~u9GbECq3TefRrnm9cu7-fy7uTrV4LWFbDilOFg4YdCq19xfQsFlokuBsTOyM9dqYwbn6s43NO25TRHdQfeqwcsdsDdk1m~EA33weO0ZNjI05he1njpTuhYCBHV-RyVSA9UmkFlXmj28Ttec4H7m6CHZXNUa~fU8y0HIg4~1adcU2U1VQRxgTin-~VCpnGsaRA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"OpenHPS_An_Open_Source_Hybrid_Positioning_System","translated_slug":"","page_count":17,"language":"en","content_type":"Work","summary":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":65488861,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/65488861/thumbnails/1.jpg","file_name":"OpenHPS.pdf","download_url":"https://www.academia.edu/attachments/65488861/download_file","bulk_download_file_name":"OpenHPS_An_Open_Source_Hybrid_Positionin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/65488861/OpenHPS-libre.pdf?1611352920=\u0026response-content-disposition=attachment%3B+filename%3DOpenHPS_An_Open_Source_Hybrid_Positionin.pdf\u0026Expires=1744203890\u0026Signature=G0AoN5OFeYvEQAE7EHjJX72tnmEWTrt7skE9GdN2FhwMZ1jaD1G7Qraa7Rq1OKv3zwiOLsJgIu8hfkicknarFwTSWYk-ar0Wk3qfmJE6rV36kZH1CaZ3K31ZK0gsVrBfX4-tzfsX1A3yKVXeihF~~u9GbECq3TefRrnm9cu7-fy7uTrV4LWFbDilOFg4YdCq19xfQsFlokuBsTOyM9dqYwbn6s43NO25TRHdQfeqwcsdsDdk1m~EA33weO0ZNjI05he1njpTuhYCBHV-RyVSA9UmkFlXmj28Ttec4H7m6CHZXNUa~fU8y0HIg4~1adcU2U1VQRxgTin-~VCpnGsaRA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":4222,"name":"Hybrid Systems","url":"https://www.academia.edu/Documents/in/Hybrid_Systems"},{"id":4672,"name":"Open Source Software","url":"https://www.academia.edu/Documents/in/Open_Source_Software"},{"id":9988,"name":"Indoor Positioning","url":"https://www.academia.edu/Documents/in/Indoor_Positioning"},{"id":10907,"name":"Context-Aware Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":15158,"name":"Object Tracking (Computer Vision)","url":"https://www.academia.edu/Documents/in/Object_Tracking_Computer_Vision_"},{"id":53164,"name":"Context Awareness","url":"https://www.academia.edu/Documents/in/Context_Awareness"},{"id":83870,"name":"Ubiquitous Positioning","url":"https://www.academia.edu/Documents/in/Ubiquitous_Positioning"},{"id":90025,"name":"Tracking","url":"https://www.academia.edu/Documents/in/Tracking"},{"id":368857,"name":"Positioning","url":"https://www.academia.edu/Documents/in/Positioning"},{"id":491363,"name":"Vehicle 
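The kind of high-level fusion mentioned in the abstract can be illustrated with a short TypeScript sketch that combines several position estimates by inverse-variance weighting. This is a generic textbook technique shown under assumed type names, not OpenHPS's actual API.

```typescript
// Minimal sketch of accuracy-weighted high-level fusion; illustrative only,
// not the actual OpenHPS API.

interface PositionEstimate {
  x: number;        // metres
  y: number;        // metres
  accuracy: number; // estimated error in metres (lower is better)
}

// Fuse several estimates by weighting each with the inverse of its variance.
function fuse(estimates: PositionEstimate[]): PositionEstimate {
  let wSum = 0, x = 0, y = 0;
  for (const e of estimates) {
    const w = 1 / (e.accuracy * e.accuracy); // inverse-variance weight
    wSum += w;
    x += w * e.x;
    y += w * e.y;
  }
  return { x: x / wSum, y: y / wSum, accuracy: Math.sqrt(1 / wSum) };
}

// Example: a coarse WLAN fingerprint fix fused with a finer BLE fix.
console.log(fuse([
  { x: 10.2, y: 4.1, accuracy: 3.0 }, // WLAN fingerprinting
  { x: 9.6, y: 4.5, accuracy: 1.5 },  // BLE multilateration
]));
```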
Tracking","url":"https://www.academia.edu/Documents/in/Vehicle_Tracking"},{"id":1111705,"name":"Typescript","url":"https://www.academia.edu/Documents/in/Typescript"},{"id":3857627,"name":"OpenHPS","url":"https://www.academia.edu/Documents/in/OpenHPS"}],"urls":[{"id":9192959,"url":"https://beatsigner.com/publications/openhps-an-open-source-hybrid-positioning-system.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-44902025-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="82685780"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data"><img alt="Research paper thumbnail of A Solid-based Architecture for Decentralised Interoperable Location Data" class="work-thumbnail" src="https://attachments.academia-assets.com/88318267/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data">A Solid-based Architecture for Decentralised Interoperable Location Data</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">In today&#39;s technological world of privacy-conscious users, the tracking of individuals via differ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">In today&#39;s technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C&#39;s Solid platform specification. Using this specification, sensor data as well as an individual&#39;s location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="6159c21c543eb34d72d104c24e120d56" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:88318267,&quot;asset_id&quot;:82685780,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/88318267/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="82685780"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="82685780"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 82685780; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=82685780]").text(description); $(".js-view-count[data-work-id=82685780]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 82685780; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='82685780']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "6159c21c543eb34d72d104c24e120d56" } } $('.js-work-strip[data-work-id=82685780]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":82685780,"title":"A Solid-based Architecture for Decentralised Interoperable Location Data","translated_title":"","metadata":{"abstract":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. 
In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.","more_info":"Maxim Van de Wynckel and Beat Signer, Proceedings of IPIN 2022 (WiP), 12th International Conference on Indoor Positioning and Indoor Navigation, Beijing, China, September 2022","ai_title_tag":"Decentralised Interoperable Location Data with Solid Architecture","publication_date":{"day":null,"month":null,"year":2022,"errors":{}}},"translated_abstract":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.","internal_url":"https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data","translated_internal_url":"","created_at":"2022-07-06T02:53:47.154-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":38496798,"work_id":82685780,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Maxim Van de Wynckel","title":"A Solid-based Architecture for Decentralised Interoperable Location Data"}],"downloadable_attachments":[{"id":88318267,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/88318267/thumbnails/1.jpg","file_name":"van_de_wynckel_IPIN2022.pdf","download_url":"https://www.academia.edu/attachments/88318267/download_file","bulk_download_file_name":"A_Solid_based_Architecture_for_Decentral.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/88318267/van_de_wynckel_IPIN2022-libre.pdf?1657119020=\u0026response-content-disposition=attachment%3B+filename%3DA_Solid_based_Architecture_for_Decentral.pdf\u0026Expires=1744203890\u0026Signature=KHcf0DP8IW7jbNL7qFTbtSzvEHF7bIavN7jH41npkOk2bkUv8tfmrlZTkUO25m1zJzJRDD4jZ0mDZ-tGcHYOyyrzztvEuXXQiMBQkPh1ZmA7snilb9MneiIBRIWjZfBgsInYpyr01~BrtRvHZhbc-CXW0qHwHCU4mvRi~nl-PRp8Pzfpp3LgbAt0gRC1fQFqa8NHgL-L2QAaFOKyXk0hQWN4jrcGrvZq~o8K3ec25dAvDqGqMQD4ffPD~zFWWXfJ8lK~JQapDw4HHX~4mKNVwXSJxpX~si-LVR0OHxdfcdJCot0AhYP6Yr4VdFJqVRf3hLlK2culxNSKeeXQ0m1Ouw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data","translated_slug":"","page_count":15,"language":"en","content_type":"Work","summary":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
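As an illustration of persisting location information to a Solid Pod, the following TypeScript sketch PUTs one observation serialised as Turtle to a hypothetical Pod URL. The Pod address and the `ex:` vocabulary are assumptions for the example; a real deployment would authenticate through a Solid client and use the paper's actual data vocabulary.

```typescript
// Minimal sketch of storing a position observation in a Solid Pod as RDF;
// the Pod URL and vocabulary are hypothetical, and a real deployment would
// use an authenticated Solid client instead of a plain fetch.

const POD_RESOURCE = 'https://pods.example.org/alice/positions/2022-09-05.ttl';

// One observation serialised as Turtle (simplified illustration).
const turtle = `
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix ex:  <http://example.org/positioning#> .

<#obs-1> a ex:PositionObservation ;
    geo:lat "50.8225"^^<http://www.w3.org/2001/XMLSchema#decimal> ;
    geo:long "4.3935"^^<http://www.w3.org/2001/XMLSchema#decimal> ;
    ex:accuracy "2.5"^^<http://www.w3.org/2001/XMLSchema#decimal> ;
    ex:capturedAt "2022-09-05T10:15:00Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
`;

async function storeObservation(): Promise<void> {
  const response = await fetch(POD_RESOURCE, {
    method: 'PUT',
    headers: { 'Content-Type': 'text/turtle' },
    body: turtle,
  });
  if (!response.ok) {
    throw new Error(`Pod rejected observation: ${response.status}`);
  }
}

storeObservation().catch(console.error);
```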
Indoor Positioning Using the OpenHPS Framework
Maxim Van de Wynckel and Beat Signer, Proceedings of IPIN 2021, 11th International Conference on Indoor Positioning and Indoor Navigation, Lloret de Mar, Spain, November 2021. DOI: 10.1109/IPIN51156.2021.9662569

Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal low- as well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper, we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and an online stage, and how it can be adapted for high accuracy or low latency. For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points and BLE beacons, as well as several trajectories that include IMU data.

[Figures and tables: Fig. 1: Component architecture of OpenHPS; Fig. 2: Pushing data in the process network; Fig. 3: Pulling data in the process network; Fig. 4: Data frame containing objects and sensor data; Fig. 5: Positioning model for server, offline and online application; Fig. 6: Visual representation of the fingerprinting dataset; Fig. 7: Symbolic spaces in GeoJSON format; Fig. 8: Test trajectory with WLAN, BLE and IMU data; Table I: Average, minimum and maximum X/Y position error compared to the fused position; Table II: Sensor fusion comparison for test trajectory; Listing 1: Location-based service; Listing 3: Graph shape for pedestrian dead reckoning; Listing 4: Positioning with fingerprinting parameters]

PDF: https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf
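The process network described in the abstract (and in the captions of Figs. 2 and 3) can be sketched as a small push-based node graph in TypeScript, where a promise resolving signals the upstream node that new data can be accepted. The classes below are illustrative assumptions, not the actual OpenHPS API.

```typescript
// Minimal sketch of a push-based process network; illustrative only,
// not the actual OpenHPS API.

type DataFrame = { timestamp: number; payload: unknown };

// A node receives frames, processes them and pushes results downstream.
abstract class GraphNode {
  private outlets: GraphNode[] = [];
  to(next: GraphNode): GraphNode { this.outlets.push(next); return next; }
  protected async pushDownstream(frame: DataFrame): Promise<void> {
    // Resolves once all downstream nodes accepted the frame, signalling
    // the upstream node that new data can be pushed.
    await Promise.all(this.outlets.map((n) => n.onPush(frame)));
  }
  abstract onPush(frame: DataFrame): Promise<void>;
}

class SourceNode extends GraphNode {
  async onPush(frame: DataFrame) { await this.pushDownstream(frame); }
}

class ProcessingNode extends GraphNode {
  constructor(private process: (f: DataFrame) => DataFrame) { super(); }
  async onPush(frame: DataFrame) { await this.pushDownstream(this.process(frame)); }
}

class SinkNode extends GraphNode {
  async onPush(frame: DataFrame) { console.log('persisted', frame); }
}

// imu -> dead-reckoning step -> sink
const imu = new SourceNode();
imu.to(new ProcessingNode((f) => ({ ...f, payload: 'position' }))).to(new SinkNode());
imu.onPush({ timestamp: Date.now(), payload: 'acceleration' }).catch(console.error);
```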
href="https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform"><img alt="Research paper thumbnail of MindXpres: An Extensible Content-driven Cross-Media Presentation Platform" class="work-thumbnail" src="https://attachments.academia-assets.com/45228932/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform">MindXpres: An Extensible Content-driven Cross-Media Presentation Platform</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/ReinoutRoels">Reinout Roels</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Existing presentation tools and document formats show a number of shortcomings in terms of the ma...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="139346b3c87b1f7ad687fcc57e552021" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:45228932,&quot;asset_id&quot;:7719770,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/45228932/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="7719770"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="7719770"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 7719770; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=7719770]").text(description); $(".js-view-count[data-work-id=7719770]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 7719770; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='7719770']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "139346b3c87b1f7ad687fcc57e552021" } } $('.js-work-strip[data-work-id=7719770]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":7719770,"title":"MindXpres: An Extensible Content-driven Cross-Media Presentation Platform","translated_title":"","metadata":{"doi":"10.1007/978-3-319-11746-1_16","abstract":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. 
We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","more_info":"Reinout Roels and Beat Signer, Proceedings of WISE 2014, 15th International Conference on Web Information System Engineering, Thessaloniki, Greece, October, 2014","publication_date":{"day":null,"month":null,"year":2014,"errors":{}}},"translated_abstract":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","internal_url":"https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform","translated_internal_url":"","created_at":"2014-07-20T00:01:24.356-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":21069,"work_id":7719770,"tagging_user_id":13155,"tagged_user_id":2181828,"co_author_invite_id":null,"email":"r***s@gmail.com","display_order":-1,"name":"Reinout Roels","title":"MindXpres: An Extensible Content-driven Cross-Media Presentation Platform"}],"downloadable_attachments":[{"id":45228932,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228932/thumbnails/1.jpg","file_name":"roels_WISE2014.pdf","download_url":"https://www.academia.edu/attachments/45228932/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Content_driven_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228932/roels_WISE2014-libre.pdf?1462042109=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Content_driven_C.pdf\u0026Expires=1744203891\u0026Signature=ejvNXBgZMy3S9RhpHCofxwu0vPfPdnTkxt2m4eL67lFAoP4cpphwVqjyk6256Qcz-R7uQTPfS9LlSLBu~2evhJ6~7QJ0rGG-f~Tz3KZ9tj-OxpMwBdQjcs--GyEDKheuqzLqWYEcl9RW7c36NZCqZQY-ZwaFFuPc6Vc1eZNuKEVKJkLH0nctKaUuMnjsMnOChDqHG1nS1ZSexOU370Fpi6cGpepJ88vB2b3Qwr9LX~01wLM79djP0nNRf5aDQSDtzoj2V9Igyoz9kakP7U2jGz0UDAsngb8AmgPkAgm066CkKBpR6XKoqtKC~x3jM5v43PX8aTlhFle5~3ImAZx5nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform","translated_slug":"","page_count":15,"language":"en","content_type":"Work","summary":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":45228932,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228932/thumbnails/1.jpg","file_name":"roels_WISE2014.pdf","download_url":"https://www.academia.edu/attachments/45228932/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Content_driven_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228932/roels_WISE2014-libre.pdf?1462042109=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Content_driven_C.pdf\u0026Expires=1744203891\u0026Signature=ejvNXBgZMy3S9RhpHCofxwu0vPfPdnTkxt2m4eL67lFAoP4cpphwVqjyk6256Qcz-R7uQTPfS9LlSLBu~2evhJ6~7QJ0rGG-f~Tz3KZ9tj-OxpMwBdQjcs--GyEDKheuqzLqWYEcl9RW7c36NZCqZQY-ZwaFFuPc6Vc1eZNuKEVKJkLH0nctKaUuMnjsMnOChDqHG1nS1ZSexOU370Fpi6cGpepJ88vB2b3Qwr9LX~01wLM79djP0nNRf5aDQSDtzoj2V9Igyoz9kakP7U2jGz0UDAsngb8AmgPkAgm066CkKBpR6XKoqtKC~x3jM5v43PX8aTlhFle5~3ImAZx5nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3424,"name":"Information 
Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8130,"name":"Web Development","url":"https://www.academia.edu/Documents/in/Web_Development"},{"id":8679,"name":"Computer Supported Collaborative Learning (CSCL)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Collaborative_Learning_CSCL_"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":17951,"name":"Learning And Teaching In Higher Education","url":"https://www.academia.edu/Documents/in/Learning_And_Teaching_In_Higher_Education"},{"id":18711,"name":"Technology-mediated teaching and learning","url":"https://www.academia.edu/Documents/in/Technology-mediated_teaching_and_learning"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":25475,"name":"Learning environments","url":"https://www.academia.edu/Documents/in/Learning_environments"},{"id":25681,"name":"E-learning 2.0","url":"https://www.academia.edu/Documents/in/E-learning_2.0"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":37753,"name":"Teaching","url":"https://www.academia.edu/Documents/in/Teaching"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":168573,"name":"Lifelong learning and adult education","url":"https://www.academia.edu/Documents/in/Lifelong_learning_and_adult_education"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":369022,"name":"Cross Media","url":"https://www.academia.edu/Documents/in/Cross_Media"},{"id":651491,"name":"Web Information Systems","url":"https://www.academia.edu/Documents/in/Web_Information_Systems"},{"id":672167,"name":"Cross-Media Information Spaces and Architectures (CISA)","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces_and_Architectures_CISA_"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"},{"id":1705150,"name":"MindXpres","url":"https://www.academia.edu/Documents/in/MindXpres"}],"urls":[{"id":9196132,"url":"https://beatsigner.com/publications/mindxpres-an-extensible-content-driven-cross-media-presentation-platform.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); 
Computing Education Research as a Translational Transdiscipline
Evan Cole, Yoshi Malaise and Beat Signer, Proceedings of SIGCSE 2023, 54th ACM Technical Symposium on Computer Science Education, Toronto, Canada, March 2023. DOI: 10.1145/3545945.3569771. PDF: https://beatsigner.com/publications/computing-education-research-as-a-translational-transdiscipline.pdf

The field of Computing Education Research (CER) produces important insights into learning and notable interventions, yet due to the research/practice divide these do not have the desired impact on learners or practitioners. Even within CER, Computing Education (CE) learning theories have limited influence on learning designs due to the theory/design divide, which is unfortunate given that the goal of CER is to impact learners and broaden access to computation.

There is a lack of an overarching model defining CER as a unified field and providing a framework for discussion. While there is discussion around many of the core activities and practices in CER, we have yet to come across a holistic characterisation. We introduce a model of Translational Computing Education Research (TCER) that helps to understand and discuss CER as a field, bridge the divides and provide internal structure, while also making the field more approachable for interdisciplinary and non-academic collaborators. In our TCER model, theory and design are equally important but weighted differently depending on an activity's position along the research/practice continuum.

In addition to the future exploration and exploitation of the presented TCER model, we propose further characterising CER as a field, applying the TCER model to understand past and contemporary CER, applying the model to address current challenges in CER, imagining what the field can become, as well as exploring the potential for translational research programmes to maximise the broader impact of computing education research.

[Figure 2: Research/practice and theory/design continuum]
Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":3429,"name":"Educational Research","url":"https://www.academia.edu/Documents/in/Educational_Research"},{"id":5799,"name":"Transdisciplinarity","url":"https://www.academia.edu/Documents/in/Transdisciplinarity"},{"id":27338,"name":"Educational Theory","url":"https://www.academia.edu/Documents/in/Educational_Theory"},{"id":154737,"name":"Best practices in education","url":"https://www.academia.edu/Documents/in/Best_practices_in_education"},{"id":312926,"name":"Computing education research","url":"https://www.academia.edu/Documents/in/Computing_education_research"},{"id":387046,"name":"Transdisciplinary research","url":"https://www.academia.edu/Documents/in/Transdisciplinary_research"},{"id":466616,"name":"Computing Education","url":"https://www.academia.edu/Documents/in/Computing_Education"},{"id":706697,"name":"Research Programmes","url":"https://www.academia.edu/Documents/in/Research_Programmes"},{"id":2412910,"name":"Computing education research (CER)","url":"https://www.academia.edu/Documents/in/Computing_education_research_CER_"}],"urls":[{"id":26993810,"url":"https://beatsigner.com/publications/computing-education-research-as-a-translational-transdiscipline.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-91891912-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="83930439"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/83930439/POSO_A_Generic_Positioning_System_Ontology"><img alt="Research paper thumbnail of POSO: A Generic Positioning System Ontology" class="work-thumbnail" src="https://attachments.academia-assets.com/89119455/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/83930439/POSO_A_Generic_Positioning_System_Ontology">POSO: A Generic Positioning System Ontology</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/MaximVandeWynckel">Maxim Van de Wynckel</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">While satellite-based positioning systems are mainly used in outdoor environments, various other ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">While satellite-based positioning systems are mainly used in outdoor environments, various other positioning techniques exist for different domains and use cases, including indoor or underground settings. The representation of spatial data via semantic linked data is well addressed by existing spatial ontologies. 
However, there is a primary focus on location data with its specific geographical context, but a lack of solutions for describing the different types of data generated by a positioning system and the used sampling techniques to obtain the data. In this paper we introduce a new generic Positioning System Ontology (POSO) that is built on top of the Semantic Sensor Network (SSN) and Sensor, Observation, Sample, and Actuator (SOSA) ontologies. With POSO, we provide missing concepts needed for describing a positioning system and its output with known positioning algorithms and techniques in mind. Thereby, we enable the improvement of hybrid positioning systems making use of multiple platforms and sensors that are described via the presented POSO ontology.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-83930439-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-83930439-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/22596793/figure-1-basic-structure-of-positioning-system-that-tracks"><img alt="Fig. 1: Basic structure of a positioning system that tracks entities " class="figure-slide-image" src="https://figures.academia-assets.com/89119455/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/22596801/figure-2-positioning-systems-and-techniques-in-the-poso"><img alt="Fig. 2: Positioning systems and techniques in the POSO ontology " class="figure-slide-image" src="https://figures.academia-assets.com/89119455/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/22596813/figure-3-example-of-positioning-system-with-position"><img alt="Fig. 
3: Example of a positioning system with a position, orientation and velocity property " class="figure-slide-image" src="https://figures.academia-assets.com/89119455/figure_003.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-83930439-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f9d907ae24cc3200a9b1ca45e84b4e0d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:89119455,&quot;asset_id&quot;:83930439,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/89119455/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="83930439"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="83930439"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 83930439; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=83930439]").text(description); $(".js-view-count[data-work-id=83930439]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 83930439; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='83930439']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f9d907ae24cc3200a9b1ca45e84b4e0d" } } $('.js-work-strip[data-work-id=83930439]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":83930439,"title":"POSO: A Generic Positioning System Ontology","translated_title":"","metadata":{"doi":"10.1007/978-3-031-19433-7_14","abstract":"While satellite-based positioning systems are mainly used in outdoor environments, various other positioning techniques exist for different 
domains and use cases, including indoor or underground settings. The representation of spatial data via semantic linked data is well addressed by existing spatial ontologies. However, there is a primary focus on location data with its specific geographical context, but a lack of solutions for describing the different types of data generated by a positioning system and the used sampling techniques to obtain the data. In this paper we introduce a new generic Positioning System Ontology (POSO) that is built on top of the Semantic Sensor Network (SSN) and Sensor, Observation, Sample, and Actuator (SOSA) ontologies. With POSO, we provide missing concepts needed for describing a positioning system and its output with known positioning algorithms and techniques in mind. Thereby, we enable the improvement of hybrid positioning systems making use of multiple platforms and sensors that are described via the presented POSO ontology.","more_info":"Maxim Van de Wynckel and Beat Signer, Proceedings of ISWC 2022, 21st International Semantic Web Conference, Hangzhou, China, October 2022","publication_date":{"day":null,"month":null,"year":2022,"errors":{}}},"translated_abstract":"While satellite-based positioning systems are mainly used in outdoor environments, various other positioning techniques exist for different domains and use cases, including indoor or underground settings. The representation of spatial data via semantic linked data is well addressed by existing spatial ontologies. However, there is a primary focus on location data with its specific geographical context, but a lack of solutions for describing the different types of data generated by a positioning system and the used sampling techniques to obtain the data. In this paper we introduce a new generic Positioning System Ontology (POSO) that is built on top of the Semantic Sensor Network (SSN) and Sensor, Observation, Sample, and Actuator (SOSA) ontologies. With POSO, we provide missing concepts needed for describing a positioning system and its output with known positioning algorithms and techniques in mind. 
Thereby, we enable the improvement of hybrid positioning systems making use of multiple platforms and sensors that are described via the presented POSO ontology.","internal_url":"https://www.academia.edu/83930439/POSO_A_Generic_Positioning_System_Ontology","translated_internal_url":"","created_at":"2022-07-30T00:30:22.758-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":38603061,"work_id":83930439,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Maxim Van de Wynckel","title":"POSO: A Generic Positioning System Ontology"}],"downloadable_attachments":[{"id":89119455,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/89119455/thumbnails/1.jpg","file_name":"van_de_wynckel_ISWC2022.pdf","download_url":"https://www.academia.edu/attachments/89119455/download_file","bulk_download_file_name":"POSO_A_Generic_Positioning_System_Ontolo.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/89119455/van_de_wynckel_ISWC2022-libre.pdf?1659167415=\u0026response-content-disposition=attachment%3B+filename%3DPOSO_A_Generic_Positioning_System_Ontolo.pdf\u0026Expires=1744203891\u0026Signature=NYbuUE6-lN2q93X2G~b4~-5Bsdzd74S0eGgpxGseGut7j-HGzZOyluqRHYzELl2MEUWrQs-o5oUS641G8gGO4SQh6C7Ww4iyJs6HRE8GC-ULnVhS94h7w3dZE9qYPIK5Fmw1TVdh5OLmyTP6FK3SyY8s9uCp30nR3GyihY19B5YOp8xzcFIu6KnCkh6rqKfzh4I3sqkdLDXF1yyHuisukYq6JXiI3X54mn5uz0g0JbGGaENlQSq9s4zsTHOxW5WM2cHDiNs1aRGwOE8VE-yZaCtWSMj6pNGXiyXgbzYavnC5~bxoqg0KcB2VWza-5fsUmShI9t7OuK-6YRsjMEUK9Q__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"POSO_A_Generic_Positioning_System_Ontology","translated_slug":"","page_count":17,"language":"en","content_type":"Work","summary":"While satellite-based positioning systems are mainly used in outdoor environments, various other positioning techniques exist for different domains and use cases, including indoor or underground settings. The representation of spatial data via semantic linked data is well addressed by existing spatial ontologies. However, there is a primary focus on location data with its specific geographical context, but a lack of solutions for describing the different types of data generated by a positioning system and the used sampling techniques to obtain the data. In this paper we introduce a new generic Positioning System Ontology (POSO) that is built on top of the Semantic Sensor Network (SSN) and Sensor, Observation, Sample, and Actuator (SOSA) ontologies. With POSO, we provide missing concepts needed for describing a positioning system and its output with known positioning algorithms and techniques in mind. 
Thereby, we enable the improvement of hybrid positioning systems making use of multiple platforms and sensors that are described via the presented POSO ontology.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":89119455,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/89119455/thumbnails/1.jpg","file_name":"van_de_wynckel_ISWC2022.pdf","download_url":"https://www.academia.edu/attachments/89119455/download_file","bulk_download_file_name":"POSO_A_Generic_Positioning_System_Ontolo.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/89119455/van_de_wynckel_ISWC2022-libre.pdf?1659167415=\u0026response-content-disposition=attachment%3B+filename%3DPOSO_A_Generic_Positioning_System_Ontolo.pdf\u0026Expires=1744203891\u0026Signature=NYbuUE6-lN2q93X2G~b4~-5Bsdzd74S0eGgpxGseGut7j-HGzZOyluqRHYzELl2MEUWrQs-o5oUS641G8gGO4SQh6C7Ww4iyJs6HRE8GC-ULnVhS94h7w3dZE9qYPIK5Fmw1TVdh5OLmyTP6FK3SyY8s9uCp30nR3GyihY19B5YOp8xzcFIu6KnCkh6rqKfzh4I3sqkdLDXF1yyHuisukYq6JXiI3X54mn5uz0g0JbGGaENlQSq9s4zsTHOxW5WM2cHDiNs1aRGwOE8VE-yZaCtWSMj6pNGXiyXgbzYavnC5~bxoqg0KcB2VWza-5fsUmShI9t7OuK-6YRsjMEUK9Q__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":805,"name":"Ontology","url":"https://www.academia.edu/Documents/in/Ontology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1495,"name":"Location Based Services","url":"https://www.academia.edu/Documents/in/Location_Based_Services"},{"id":9988,"name":"Indoor Positioning","url":"https://www.academia.edu/Documents/in/Indoor_Positioning"},{"id":17711,"name":"Semantic Web","url":"https://www.academia.edu/Documents/in/Semantic_Web"},{"id":86589,"name":"Outdoor Positioning","url":"https://www.academia.edu/Documents/in/Outdoor_Positioning"},{"id":114414,"name":"Specification","url":"https://www.academia.edu/Documents/in/Specification"},{"id":368857,"name":"Positioning","url":"https://www.academia.edu/Documents/in/Positioning"},{"id":522700,"name":"Location-Based Service","url":"https://www.academia.edu/Documents/in/Location-Based_Service"},{"id":524072,"name":"Location","url":"https://www.academia.edu/Documents/in/Location"},{"id":1182242,"name":"Positioning Technology","url":"https://www.academia.edu/Documents/in/Positioning_Technology"},{"id":3857627,"name":"OpenHPS","url":"https://www.academia.edu/Documents/in/OpenHPS"}],"urls":[{"id":29589449,"url":"https://beatsigner.com/publications/poso-a-generic-positioning-system-ontology.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-83930439-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="128692393"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128692393/JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories"><img 
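As a rough illustration of how such a semantic description could be produced, the snippet below uses the n3.js library to emit a few triples about a hypothetical BLE-based indoor positioning system. The poso: namespace and class names are assumptions based on the paper's description and have not been checked against the published ontology.

```typescript
import { Writer, DataFactory } from 'n3';

const { namedNode } = DataFactory;

// Assumed namespaces; the poso: terms are illustrative only.
const POSO = 'http://purl.org/poso/';
const SOSA = 'http://www.w3.org/ns/sosa/';
const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
const EX = 'http://example.org/';

const writer = new Writer({
  prefixes: { poso: POSO, sosa: SOSA, ex: EX }
});

// A hypothetical BLE-based indoor positioning system that observes
// the position property of a tracked phone.
writer.addQuad(
  namedNode(EX + 'blePositioningSystem'),
  namedNode(RDF + 'type'),
  namedNode(POSO + 'PositioningSystem')
);
writer.addQuad(
  namedNode(EX + 'blePositioningSystem'),
  namedNode(SOSA + 'observes'),
  namedNode(EX + 'phonePosition')
);

// Serialise the description as Turtle.
writer.end((error, result) => console.log(result));
```

Since POSO builds on SSN/SOSA, a positioning system can be modelled as a sensor system whose observations target position, orientation or velocity properties, which is what the sosa:observes link above sketches.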
alt="Research paper thumbnail of JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories" class="work-thumbnail" src="https://attachments.academia-assets.com/122230119/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128692393/JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories">JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">A main challenge faced by non-profit organisations providing computer science education to under-...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f0b8173166864564b855fcf9e41c34f0" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122230119,&quot;asset_id&quot;:128692393,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122230119/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128692393"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128692393"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128692393; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128692393]").text(description); $(".js-view-count[data-work-id=128692393]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128692393; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128692393']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f0b8173166864564b855fcf9e41c34f0" } } $('.js-work-strip[data-work-id=128692393]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":128692393,"title":"JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories","translated_title":"","metadata":{"doi":"10.48550/arXiv.2504.04006","abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. 
Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","more_info":"Inas Ghazouani Ghailani, Yoshi Malaise and Beat Signer, WISE-2025-02, arXiv preprint, April 2025","grobid_abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions. • Applied computing → Interactive learning environments; Distance learning.","publication_date":{"day":null,"month":null,"year":2025,"errors":{}},"grobid_abstract_attachment_id":122230119},"translated_abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","internal_url":"https://www.academia.edu/128692393/JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories","translated_internal_url":"","created_at":"2025-04-07T21:45:23.049-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":43341471,"work_id":128692393,"tagging_user_id":13155,"tagged_user_id":230910613,"co_author_invite_id":null,"email":"y***e@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Yoshi Malaise","title":"JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories"}],"downloadable_attachments":[{"id":122230119,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122230119/thumbnails/1.jpg","file_name":"ghailani_CoRR2025.pdf","download_url":"https://www.academia.edu/attachments/122230119/download_file","bulk_download_file_name":"JsStories_Improving_Social_Inclusion_in.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122230119/ghailani_CoRR2025-libre.pdf?1744091630=\u0026response-content-disposition=attachment%3B+filename%3DJsStories_Improving_Social_Inclusion_in.pdf\u0026Expires=1744203891\u0026Signature=c00SHmvX6T0z7gvAuOatRhO0heHKY8B1ZGGigMgKqMt0JybJdx17IBz8YzqsEgTAGFBDIjJl1~pXTb~dzwYlbDc2v9sERf9Hk7CFZyzR6unKUIyyaE-w~dEKxTLMDB3aNb46W9g563KI7nbIrciCkadkaJ4rQnbg0rfA0uQrlcBhuD1kcAqWLmdy6JTgDpadpbtIUET11VjR3S~1iDizdnufgOzwvXAnom4holu4zoAMBKGOKBC81t66eQPwko~meE2Wx-qosxG1gVAqHB0~AC2J~qvIVQLPqyh-TLK6grl9-6sd7zm4RR~686g1z4lFEijbmMHmAW8ABouT9ldqJg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories","translated_slug":"","page_count":10,"language":"en","content_type":"Work","summary":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122230119,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122230119/thumbnails/1.jpg","file_name":"ghailani_CoRR2025.pdf","download_url":"https://www.academia.edu/attachments/122230119/download_file","bulk_download_file_name":"JsStories_Improving_Social_Inclusion_in.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122230119/ghailani_CoRR2025-libre.pdf?1744091630=\u0026response-content-disposition=attachment%3B+filename%3DJsStories_Improving_Social_Inclusion_in.pdf\u0026Expires=1744203891\u0026Signature=c00SHmvX6T0z7gvAuOatRhO0heHKY8B1ZGGigMgKqMt0JybJdx17IBz8YzqsEgTAGFBDIjJl1~pXTb~dzwYlbDc2v9sERf9Hk7CFZyzR6unKUIyyaE-w~dEKxTLMDB3aNb46W9g563KI7nbIrciCkadkaJ4rQnbg0rfA0uQrlcBhuD1kcAqWLmdy6JTgDpadpbtIUET11VjR3S~1iDizdnufgOzwvXAnom4holu4zoAMBKGOKBC81t66eQPwko~meE2Wx-qosxG1gVAqHB0~AC2J~qvIVQLPqyh-TLK6grl9-6sd7zm4RR~686g1z4lFEijbmMHmAW8ABouT9ldqJg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":6492,"name":"Storytelling","url":"https://www.academia.edu/Documents/in/Storytelling"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":15869,"name":"Online Learning","url":"https://www.academia.edu/Documents/in/Online_Learning"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":19707,"name":"Social Inclusion","url":"https://www.academia.edu/Documents/in/Social_Inclusion"},{"id":22412,"name":"Digital Storytelling","url":"https://www.academia.edu/Documents/in/Digital_Storytelling"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":43774,"name":"Learning","url":"https://www.academia.edu/Documents/in/Learning"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":1251579,"name":"Human-Computer 
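The abstract's mention of level-appropriate content based on knowledge graphs can be made concrete with a small sketch. The following is purely illustrative, assuming a simple prerequisite graph of JavaScript concepts; it is not taken from the JsStories implementation.

// Hypothetical sketch: selecting level-appropriate content from a
// prerequisite knowledge graph of JavaScript concepts. All names are
// illustrative; JsStories' actual data model may differ.
const conceptGraph = {
  variables:    { prerequisites: [] },
  conditionals: { prerequisites: ['variables'] },
  loops:        { prerequisites: ['variables', 'conditionals'] },
  functions:    { prerequisites: ['variables'] },
  arrays:       { prerequisites: ['loops', 'functions'] }
};

// A concept is "ready" for a student if all of its prerequisites have been
// mastered but the concept itself has not.
function readyConcepts(mastered) {
  return Object.keys(conceptGraph).filter(concept =>
    !mastered.has(concept) &&
    conceptGraph[concept].prerequisites.every(p => mastered.has(p))
  );
}

console.log(readyConcepts(new Set(['variables'])));
// -> [ 'conditionals', 'functions' ]

A story chapter tagged with a "ready" concept could then be offered next, so that students never face material whose prerequisites they have not yet covered.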
Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":47448285,"url":"https://beatsigner.com/publications/jsstories-improving-social-inclusion-in-computer-science-education-through-interactive-stories.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-128692393-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="108934703"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/108934703/Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning"><img alt="Research paper thumbnail of Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning" class="work-thumbnail" src="https://attachments.academia-assets.com/107202372/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/108934703/Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning">Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">We introduce the Explorotron Visual Studio Code extension for guided and independent code explora...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. 
We further reflect on the question raised by Begel and Ko whether technology should &quot;structure learning for learners&quot; or whether learners should &quot;be taught how to structure their own independent learning&quot; outside of the classroom.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-108934703-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-108934703-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/32963430/figure-1-explorotron-visual-studio-code-extension-showing"><img alt="Figure 1: Explorotron Visual Studio Code extension showing recommended study lenses on the left and the Argument Picker study lens where students have to decide which argument goes where in the code on the right. Image altered due to space constraints. " class="figure-slide-image" src="https://figures.academia-assets.com/107202372/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/32963438/figure-2-overview-of-the-different-ways-students-can-study"><img alt="Figure 2: Overview of the different ways students can study code " class="figure-slide-image" src="https://figures.academia-assets.com/107202372/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/32963442/figure-3-overview-of-the-architecture-to-generate-the"><img alt="Figure 3: Overview of the architecture to generate the recommended lenses page " class="figure-slide-image" src="https://figures.academia-assets.com/107202372/figure_003.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-108934703-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="2a1c1546630e6361ad3731c3174e2b30" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:107202372,&quot;asset_id&quot;:108934703,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/107202372/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="108934703"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="108934703"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 108934703; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); 
$(".js-view-count[data-work-id=108934703]").text(description); $(".js-view-count[data-work-id=108934703]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 108934703; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='108934703']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "2a1c1546630e6361ad3731c3174e2b30" } } $('.js-work-strip[data-work-id=108934703]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":108934703,"title":"Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning","translated_title":"","metadata":{"doi":"10.1145/3631802.3631816","abstract":"We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. We further reflect on the question raised by Begel and Ko whether technology should \"structure learning for learners\" or whether learners should \"be taught how to structure their own independent learning\" outside of the classroom.","more_info":"Yoshi Malaise and Beat Signer, Proceedings of Koli Calling 2023, 23rd International Conference on Computing Education Research, Koli, Finland, November 2023","publication_date":{"day":null,"month":null,"year":2023,"errors":{}}},"translated_abstract":"We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. 
We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. We further reflect on the question raised by Begel and Ko whether technology should \"structure learning for learners\" or whether learners should \"be taught how to structure their own independent learning\" outside of the classroom.","internal_url":"https://www.academia.edu/108934703/Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning","translated_internal_url":"","created_at":"2023-11-06T12:23:57.019-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":40513373,"work_id":108934703,"tagging_user_id":13155,"tagged_user_id":230910613,"co_author_invite_id":null,"email":"y***e@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Yoshi Malaise","title":"Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning"}],"downloadable_attachments":[{"id":107202372,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/107202372/thumbnails/1.jpg","file_name":"Malaise_KoliCalling2023.pdf","download_url":"https://www.academia.edu/attachments/107202372/download_file","bulk_download_file_name":"Explorotron_An_IDE_Extension_for_Guided.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/107202372/Malaise_KoliCalling2023-libre.pdf?1699333282=\u0026response-content-disposition=attachment%3B+filename%3DExplorotron_An_IDE_Extension_for_Guided.pdf\u0026Expires=1744203891\u0026Signature=UQDIoJorTmZ9zbaqUoKm6vs0afIEY3P24Z5yWXKB5aFedrMpQFi~VPJZoJ0xZ5aOqCv5dklBNC5KjUk-IVdV7wa-K~53qPwFMch6Fs077umlQoTxyP2k8Yx4D5Z9BP3aojKaWOBEKmfcOJM8xdtOSSknfeTb5YuhX2hxmLaEv~uzH25Je~viWiSCj-jHFsEMEh6LKh4vHsUi7phf7~ZXnvFRQFUYTX7sG85zWwPKYDhHkoarBzi0G4MdoTd9jM8Z2d7BCxjgWQK7IiFxEoBvTbsyIrN-aFgg2z98GQqhAGenvT0CbJ2BQeH0Finf0WEdZCyFCh1~sGpRhWUTig74qQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning","translated_slug":"","page_count":8,"language":"en","content_type":"Work","summary":"We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. 
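Since Explorotron ships as a Visual Studio Code extension, a minimal sketch of how a study lens could be exposed as an extension command may help; the command id, lens logic and message below are hypothetical and do not reflect Explorotron's actual code.

// Hypothetical sketch of registering an Explorotron-style "study lens" as a
// Visual Studio Code command. The command id would also need to be declared
// in the extension's package.json to appear in the command palette.
const vscode = require('vscode');

function activate(context) {
  const disposable = vscode.commands.registerCommand(
    'explorotronSketch.countFunctions',  // illustrative command id
    () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) { return; }
      // A toy "lens": count function declarations in the open file and
      // surface the result to the student.
      const matches = editor.document.getText().match(/\bfunction\b/g) || [];
      vscode.window.showInformationMessage(
        `This file declares ${matches.length} function(s) to explore.`);
    });
  context.subscriptions.push(disposable);
}

module.exports = { activate };

A real lens such as the Argument Picker shown in Figure 1 would additionally transform the displayed code and collect the student's answers, but the registration pattern stays the same.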
Books

Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces
Beat Signer, 2017

While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. In this book we review a wide variety of projects and technological developments for bridging the paper-digital divide. We present our information-centric approach for a tight integration of paper and digital information that is based on a general cross-media information platform. Different innovative augmented paper applications that have been developed based on our interactive paper platform and Anoto Digital Pen and Paper technology are introduced. For example, these applications include a mobile interactive paper-based tourist information system (EdFest) and a paper-digital presentation tool (PaperPoint). Challenges and solutions for new forms of interactive paper and cross-media publishing are discussed. The book is targeted at developers and researchers in information systems, hypermedia and human computer interaction, professionals from the printing and publishing industry as well as readers with a general interest in the future of paper.

Buy from Amazon: http://www.amazon.co.uk/Fundamental-Concepts-Interactive-Cross-Media-Information/dp/3837027139
Ebook: https://www.amazon.com/Fundamental-Concepts-Interactive-Cross-Media-Information-ebook/dp/B0753MK7VN/
Interviews

Interview with Beat Signer
Claus Atzenbeck and Beat Signer, ACM SIGWEB Newsletter 2021(Winter), February 2021
DOI: 10.1145/3447879.3447881

Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and co-director of the Web & Information Systems Engineering (WISE) research lab. He received a PhD in Computer Science from ETH Zurich, where he also led the Interactive Paper lab as a senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards.

Beat has 20 years of experience in research on cross-media information management and multimodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects, and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.

PDF: https://beatsigner.com/publications/interview-with-beat-signer.pdf
Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5978,"name":"Web Technologies","url":"https://www.academia.edu/Documents/in/Web_Technologies"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":15893,"name":"Hypertext theory","url":"https://www.academia.edu/Documents/in/Hypertext_theory"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":193390,"name":"RSL","url":"https://www.academia.edu/Documents/in/RSL"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"}],"urls":[{"id":29589554,"url":"https://beatsigner.com/publications/interview-with-beat-signer.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-46927206-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="4294482" id="flyers"><div class="js-work-strip profile--work_container" data-work-id="12785185"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool"><img alt="Research paper thumbnail of MindXpres - An Extensible Cross-Media Presentation Tool" class="work-thumbnail" src="https://attachments.academia-assets.com/51752000/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool">MindXpres - An Extensible Cross-Media Presentation Tool</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="32cf187967198e5a7bbfa140eccca594" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:51752000,&quot;asset_id&quot;:12785185,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/51752000/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="12785185"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa 
fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="12785185"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 12785185; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=12785185]").text(description); $(".js-view-count[data-work-id=12785185]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 12785185; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='12785185']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "32cf187967198e5a7bbfa140eccca594" } } $('.js-work-strip[data-work-id=12785185]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":12785185,"title":"MindXpres - An Extensible Cross-Media Presentation Tool","translated_title":"","metadata":{"ai_abstract":"MindXpres is an innovative presentation platform designed to address the limitations of traditional slideware by providing a plug-in architecture that enhances content presentation. It allows users to integrate various media types and functionalities, focusing on content delivery rather than aesthetic aspects. 
By enabling central content storage and a web-based framework, MindXpres ensures high portability and audience interaction, paving the way for collaborative and engaging presentations."},"translated_abstract":null,"internal_url":"https://www.academia.edu/12785185/MindXpres_An_Extensible_Cross_Media_Presentation_Tool","translated_internal_url":"","created_at":"2015-06-03T18:54:57.671-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":51752000,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/51752000/thumbnails/1.jpg","file_name":"MindXpres.pdf","download_url":"https://www.academia.edu/attachments/51752000/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Cross_Media_Pres.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/51752000/MindXpres-libre.pdf?1486839083=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Cross_Media_Pres.pdf\u0026Expires=1744203890\u0026Signature=E37dAKw3Xbl-K8hug2exMP9EmPCs3wkqJlRmdSrbuCWF7EN3iQwp95~Hpz~mIQ13GVmUx9ZXhvs-rFCj9tGxduISvZIVgerlqkzbvdX9GN6MOfw2cd8aHuQv7FzD4SQzUVFzVItR50P8lEgQmnw7jmzFniW1fmPmvi-E7aMPFu5bgeKfdLWxwa72BTBBOHSXsdo6uMItljFzUz-nKxRM3IyDxrWBT~Jad9sq~ntAmtp6LZlZLyrf2ts9T9ZVqQrtV2SwHE8C36hikJaV~pXoKWdGZ9BW1SmdFpmpiimVj-zuqIF7AmEccFAggOGbeTGb65hJwnUUbzF1fbLnJrsHiA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"MindXpres_An_Extensible_Cross_Media_Presentation_Tool","translated_slug":"","page_count":1,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":51752000,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/51752000/thumbnails/1.jpg","file_name":"MindXpres.pdf","download_url":"https://www.academia.edu/attachments/51752000/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Cross_Media_Pres.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/51752000/MindXpres-libre.pdf?1486839083=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Cross_Media_Pres.pdf\u0026Expires=1744203890\u0026Signature=E37dAKw3Xbl-K8hug2exMP9EmPCs3wkqJlRmdSrbuCWF7EN3iQwp95~Hpz~mIQ13GVmUx9ZXhvs-rFCj9tGxduISvZIVgerlqkzbvdX9GN6MOfw2cd8aHuQv7FzD4SQzUVFzVItR50P8lEgQmnw7jmzFniW1fmPmvi-E7aMPFu5bgeKfdLWxwa72BTBBOHSXsdo6uMItljFzUz-nKxRM3IyDxrWBT~Jad9sq~ntAmtp6LZlZLyrf2ts9T9ZVqQrtV2SwHE8C36hikJaV~pXoKWdGZ9BW1SmdFpmpiimVj-zuqIF7AmEccFAggOGbeTGb65hJwnUUbzF1fbLnJrsHiA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":451,"name":"Programming Languages","url":"https://www.academia.edu/Documents/in/Programming_Languages"},{"id":453,"name":"Object Oriented 
Programming","url":"https://www.academia.edu/Documents/in/Object_Oriented_Programming"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":492,"name":"Management Information Systems","url":"https://www.academia.edu/Documents/in/Management_Information_Systems"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1007,"name":"Teaching English as a Second Language","url":"https://www.academia.edu/Documents/in/Teaching_English_as_a_Second_Language"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2209,"name":"Cross-Media Studies","url":"https://www.academia.edu/Documents/in/Cross-Media_Studies"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8129,"name":"Software Development","url":"https://www.academia.edu/Documents/in/Software_Development"},{"id":8130,"name":"Web Development","url":"https://www.academia.edu/Documents/in/Web_Development"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":8679,"name":"Computer Supported Collaborative Learning (CSCL)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Collaborative_Learning_CSCL_"},{"id":9270,"name":"Software Architecture","url":"https://www.academia.edu/Documents/in/Software_Architecture"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":12417,"name":"Multimedia Learning","url":"https://www.academia.edu/Documents/in/Multimedia_Learning"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":17951,"name":"Learning And Teaching In Higher 
Education","url":"https://www.academia.edu/Documents/in/Learning_And_Teaching_In_Higher_Education"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":25475,"name":"Learning environments","url":"https://www.academia.edu/Documents/in/Learning_environments"},{"id":25681,"name":"E-learning 2.0","url":"https://www.academia.edu/Documents/in/E-learning_2.0"},{"id":33112,"name":"Blended learning in higher education","url":"https://www.academia.edu/Documents/in/Blended_learning_in_higher_education"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":53293,"name":"Software","url":"https://www.academia.edu/Documents/in/Software"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":148250,"name":"Presentation of Paper in a Seminar","url":"https://www.academia.edu/Documents/in/Presentation_of_Paper_in_a_Seminar"},{"id":242420,"name":"Presentation Slides","url":"https://www.academia.edu/Documents/in/Presentation_Slides"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":407764,"name":"Visual Presentation","url":"https://www.academia.edu/Documents/in/Visual_Presentation"},{"id":502875,"name":"Microsoft Powerpoint","url":"https://www.academia.edu/Documents/in/Microsoft_Powerpoint"},{"id":514903,"name":"Presentasion Training","url":"https://www.academia.edu/Documents/in/Presentasion_Training"},{"id":554420,"name":"PPT Presentation","url":"https://www.academia.edu/Documents/in/PPT_Presentation"},{"id":631659,"name":"Power Point Presentations","url":"https://www.academia.edu/Documents/in/Power_Point_Presentations"},{"id":651491,"name":"Web Information Systems","url":"https://www.academia.edu/Documents/in/Web_Information_Systems"},{"id":672167,"name":"Cross-Media Information Spaces and Architectures (CISA)","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces_and_Architectures_CISA_"},{"id":1019468,"name":"Teaching and Learning In Adult and Higher Education","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning_In_Adult_and_Higher_Education"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"},{"id":1705150,"name":"MindXpres","url":"https://www.academia.edu/Documents/in/MindXpres"}],"urls":[{"id":4839679,"url":"http://www.beatsigner.com/flyers/MindXpres.pdf"},{"id":7945887,"url":"http://mindxpres.com/"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-12785185-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="12785289"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" 
href="https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections"><img alt="Research paper thumbnail of ArtVis - Gaining New Insights from Digital Artwork Collections" class="work-thumbnail" src="https://attachments.academia-assets.com/37820024/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections">ArtVis - Gaining New Insights from Digital Artwork Collections</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="cf46dc33482e1a6847ab6120d4824141" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:37820024,&quot;asset_id&quot;:12785289,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/37820024/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="12785289"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="12785289"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 12785289; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=12785289]").text(description); $(".js-view-count[data-work-id=12785289]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 12785289; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='12785289']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "cf46dc33482e1a6847ab6120d4824141" } } $('.js-work-strip[data-work-id=12785289]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":12785289,"title":"ArtVis - Gaining New Insights from Digital Artwork 
Collections","translated_title":"","metadata":{"ai_abstract":"The ArtVis project employs innovative visualization techniques alongside a tangible user interface to enhance interaction with a vast digital collection of European artwork from the 11th to the 19th century. By enabling users to explore, analyze, and browse artworks through three interconnected visualization components, ArtVis aims to foster new insights. Specialized controls facilitate user-driven exploration across various dimensions, such as artist name, museum, artistic type, and time period, ultimately promoting a playful and exploratory user experience.","ai_title_tag":"ArtVis: Interactive Visualization of Artwork"},"translated_abstract":null,"internal_url":"https://www.academia.edu/12785289/ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections","translated_internal_url":"","created_at":"2015-06-03T18:58:21.633-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":37820024,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/37820024/thumbnails/1.jpg","file_name":"ArtVis.pdf","download_url":"https://www.academia.edu/attachments/37820024/download_file","bulk_download_file_name":"ArtVis_Gaining_New_Insights_from_Digital.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/37820024/ArtVis-libre.pdf?1433383253=\u0026response-content-disposition=attachment%3B+filename%3DArtVis_Gaining_New_Insights_from_Digital.pdf\u0026Expires=1744203890\u0026Signature=dgy8zBOYc31b~rFzfNRy~oyzzRE72DIWeoamyUTC~klHl-2jP-Efjat9gVqEjvP0dCa1l5BYz-E7RIEa5oR42enQgyXYm6QIuMuG9ImedzkL2RvIyUfPBkmMv0NNGXsZNv6rAGEeG0ZGLv-h0psRe-BVojQevfU22QQmFj28SLFMbAiBFXG-72IVBR5NqsCvzJLyBEhMah7cY8RqUSd388OuOHsJ1bU5F4myQsNOdb2uwxjdr9fDX8gTPmlNOi-kN7nkFRQdZ4nhbbqBXltxI~pwY-XQj0H8F4IlK7~qdgTvqz3dGhIf-7ALwptbHZUjCN5lLohIbua8OZfKDgC1eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"ArtVis_Gaining_New_Insights_from_Digital_Artwork_Collections","translated_slug":"","page_count":1,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":37820024,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/37820024/thumbnails/1.jpg","file_name":"ArtVis.pdf","download_url":"https://www.academia.edu/attachments/37820024/download_file","bulk_download_file_name":"ArtVis_Gaining_New_Insights_from_Digital.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/37820024/ArtVis-libre.pdf?1433383253=\u0026response-content-disposition=attachment%3B+filename%3DArtVis_Gaining_New_Insights_from_Digital.pdf\u0026Expires=1744203890\u0026Signature=dgy8zBOYc31b~rFzfNRy~oyzzRE72DIWeoamyUTC~klHl-2jP-Efjat9gVqEjvP0dCa1l5BYz-E7RIEa5oR42enQgyXYm6QIuMuG9ImedzkL2RvIyUfPBkmMv0NNGXsZNv6rAGEeG0ZGLv-h0psRe-BVojQevfU22QQmFj28SLFMbAiBFXG-72IVBR5NqsCvzJLyBEhMah7cY8RqUSd388OuOHsJ1bU5F4myQsNOdb2uwxjdr9fDX8gTPmlNOi-kN7nkFRQdZ4nhbbqBXltxI~pwY-XQj0H8F4IlK7~qdgTvqz3dGhIf-7ALwptbHZUjCN5lLohIbua8OZfKDgC1eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information 
Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1236,"name":"Art","url":"https://www.academia.edu/Documents/in/Art"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information 
Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":18711,"name":"Technology-mediated teaching and learning","url":"https://www.academia.edu/Documents/in/Technology-mediated_teaching_and_learning"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":27360,"name":"Databases","url":"https://www.academia.edu/Documents/in/Databases"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":66003,"name":"Human-Machine Interaction","url":"https://www.academia.edu/Documents/in/Human-Machine_Interaction"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":1226904,"name":"Computer Science and Engineering","url":"https://www.academia.edu/Documents/in/Computer_Science_and_Engineering-1"}],"urls":[{"id":4839678,"url":"http://www.beatsigner.com/flyers/ArtVis.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-12785289-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="4681" id="papers"><div class="js-work-strip profile--work_container" data-work-id="40139780"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures"><img alt="Research paper thumbnail of Towards Cross-Media Information Spaces and Architectures" class="work-thumbnail" src="https://attachments.academia-assets.com/77413706/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures">Towards Cross-Media Information Spaces and Architectures</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The efficient management and retrieval of information via dedicated devices and data structures h...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush&#39;s seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. 
We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ad0690a5d0beba195f5e5f3c3528a83e" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77413706,&quot;asset_id&quot;:40139780,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77413706/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="40139780"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="40139780"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 40139780; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=40139780]").text(description); $(".js-view-count[data-work-id=40139780]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 40139780; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='40139780']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 
})(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ad0690a5d0beba195f5e5f3c3528a83e" } } $('.js-work-strip[data-work-id=40139780]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":40139780,"title":"Towards Cross-Media Information Spaces and Architectures","translated_title":"","metadata":{"doi":"10.1109/RCIS.2019.8877105","abstract":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","more_info":"Beat Signer, Proceedings of RCIS 2019, 13th International Conference on Research Challenges in Information Science, Brussels, Belgium, May 2019","publication_date":{"day":null,"month":null,"year":2019,"errors":{}}},"translated_abstract":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. 
First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","internal_url":"https://www.academia.edu/40139780/Towards_Cross_Media_Information_Spaces_and_Architectures","translated_internal_url":"","created_at":"2019-08-21T09:45:00.925-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":77413706,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77413706/thumbnails/1.jpg","file_name":"towards_cross_media_information_spaces_and_architectures.pdf","download_url":"https://www.academia.edu/attachments/77413706/download_file","bulk_download_file_name":"Towards_Cross_Media_Information_Spaces_a.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77413706/towards_cross_media_information_spaces_and_architectures-libre.pdf?1640609738=\u0026response-content-disposition=attachment%3B+filename%3DTowards_Cross_Media_Information_Spaces_a.pdf\u0026Expires=1744203890\u0026Signature=QjfZjpxK53XI~slQCb9lXlrn4prZruc~Y8OLUpOFQS0FHWeEwRnOHLwSWeqXaHCloVH3Xn44UjXLgbDxdN64BZQV35DsAXcWUR6SQI-fvq4HdRiysZ1OoK57-dxFaYMQU5sH~mCdm3P8W2JUrAUcIPrhYU8Khv~ZhtQFU~PftjOhTiMDuJQDJGeGWinRPN313B8NUVp7biqXsJBPB-3LY9h93afI3Gopt-COoGcH1AAWzq2E3obzJe8Kd~ie4pIQwgPnun9kB93Mcxb~rnmSjBjzKfyIOdgUbKAVRKhgbYzlj2qjs2cZKu9CjTAHYbTLwJep8GV~~-s70mz7I6Q2Ug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Towards_Cross_Media_Information_Spaces_and_Architectures","translated_slug":"","page_count":7,"language":"en","content_type":"Work","summary":"The efficient management and retrieval of information via dedicated devices and data structures has been investigated since the early days of Vannevar Bush's seminal article As We May Think introducing the Memex. However, nowadays information is usually fragmented across different media types, devices as well as digital and physical environments, and we are often struggling to retrieve specific information. We discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. First, we have a look at an extensible cross-media linking solution based on the resource-selector-link (RSL) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. 
We then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces—including some recent work on dynamic data physicalisation—are discussed. A number of research artefacts are used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. Last but not least, we provide an outlook on how the embedding of the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77413706,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77413706/thumbnails/1.jpg","file_name":"towards_cross_media_information_spaces_and_architectures.pdf","download_url":"https://www.academia.edu/attachments/77413706/download_file","bulk_download_file_name":"Towards_Cross_Media_Information_Spaces_a.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77413706/towards_cross_media_information_spaces_and_architectures-libre.pdf?1640609738=\u0026response-content-disposition=attachment%3B+filename%3DTowards_Cross_Media_Information_Spaces_a.pdf\u0026Expires=1744203890\u0026Signature=QjfZjpxK53XI~slQCb9lXlrn4prZruc~Y8OLUpOFQS0FHWeEwRnOHLwSWeqXaHCloVH3Xn44UjXLgbDxdN64BZQV35DsAXcWUR6SQI-fvq4HdRiysZ1OoK57-dxFaYMQU5sH~mCdm3P8W2JUrAUcIPrhYU8Khv~ZhtQFU~PftjOhTiMDuJQDJGeGWinRPN313B8NUVp7biqXsJBPB-3LY9h93afI3Gopt-COoGcH1AAWzq2E3obzJe8Kd~ie4pIQwgPnun9kB93Mcxb~rnmSjBjzKfyIOdgUbKAVRKhgbYzlj2qjs2cZKu9CjTAHYbTLwJep8GV~~-s70mz7I6Q2Ug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":5266,"name":"Human Information 
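The RSL metamodel referenced in this and several of the abstracts above rests on three core abstractions: resources (whole information units), selectors (parts of resources) and links (many-to-many associations between arbitrary entities, including other links). The following Python sketch is a simplified illustration of these abstractions; the class and attribute names are illustrative choices, not the metamodel's published vocabulary.

# Simplified sketch of the core RSL abstractions; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Common supertype: resources, selectors and links are all entities,
    so a link may point to any of them (including other links)."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Resource(Entity):
    """A whole unit of information in some medium, e.g. a web page,
    a PDF document or a sheet of interactive paper."""
    uri: str = ""

@dataclass
class Selector(Entity):
    """Addresses part of a resource, e.g. a text range in a document
    or a region on a printed page."""
    resource: Resource = None
    spec: str = ""  # medium-specific addressing, e.g. "page=2;rect=40,120,200,90"

@dataclass
class Link(Entity):
    """A many-to-many association between arbitrary entities."""
    sources: list = field(default_factory=list)
    targets: list = field(default_factory=list)

# Example: link a region on a printed page to a digital video resource.
paper = Resource("Lecture handout", uri="paper://handout-07")
figure = Selector("Figure 2", resource=paper, spec="page=2;rect=40,120,200,90")
video = Resource("Lecture recording", uri="https://example.org/lecture-07.mp4")
link = Link("Figure explained in video", sources=[figure], targets=[video])

Because links relate entities rather than raw addresses, the same model can span digital documents, applications and physical artefacts, which is what makes it usable as a cross-media linking foundation.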
Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":9196123,"url":"https://beatsigner.com/publications/towards-cross-media-information-spaces-and-architectures.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-40139780-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="120601781"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/120601781/Pen_based_Interaction"><img alt="Research paper thumbnail of Pen-based Interaction" class="work-thumbnail" src="https://attachments.academia-assets.com/120841760/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/120601781/Pen_based_Interaction">Pen-based Interaction</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The use of pens in human-computer interaction has been investigated since Ivan Sutherland&#39;s Sketc...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The use of pens in human-computer interaction has been investigated since Ivan Sutherland&#39;s Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-120601781-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-120601781-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/51867935/figure-1-early-pen-based-devices-and-technologies"><img alt="Fig. 1 Early pen-based devices and technologies " class="figure-slide-image" src="https://figures.academia-assets.com/120841760/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/51867955/figure-2-modern-pen-based-devices-and-technologies"><img alt="Fig. 2 Modern pen-based devices and technologies " class="figure-slide-image" src="https://figures.academia-assets.com/120841760/figure_002.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-120601781-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="84a21d71cafca7d213e9d6c8475d466c" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:120841760,&quot;asset_id&quot;:120601781,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/120841760/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="120601781"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="120601781"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 120601781; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=120601781]").text(description); $(".js-view-count[data-work-id=120601781]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 120601781; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='120601781']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); 
container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "84a21d71cafca7d213e9d6c8475d466c" } } $('.js-work-strip[data-work-id=120601781]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":120601781,"title":"Pen-based Interaction","translated_title":"","metadata":{"doi":"10.1007/978-3-319-27648-9_102-1","abstract":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","more_info":"Beat Signer, Handbook of Human Computer Interaction, Major Reference Work, Springer Nature, 2025","publication_date":{"day":null,"month":null,"year":2025,"errors":{}}},"translated_abstract":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","internal_url":"https://www.academia.edu/120601781/Pen_based_Interaction","translated_internal_url":"","created_at":"2024-06-05T17:06:12.712-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[],"downloadable_attachments":[{"id":120841760,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120841760/thumbnails/1.jpg","file_name":"signer_HCIhandbook2025.pdf","download_url":"https://www.academia.edu/attachments/120841760/download_file","bulk_download_file_name":"Pen_based_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120841760/signer_HCIhandbook2025-libre.pdf?1736963196=\u0026response-content-disposition=attachment%3B+filename%3DPen_based_Interaction.pdf\u0026Expires=1744203890\u0026Signature=X9RgsdYhBbZAipbbOyQH~gZhPyD6ZdK0CrdJ-kXG2b-aSpDS6pTrB3sGAmAcJLAXX-c-u2trydCc9ZyejF-LRRZnGW5~rtTAPs6UpWdgqvmewJnX1nWpAW1LGOe~guI8W9CH3PaE9mNxf5nG2DvgNdnxMmlhE2T4IV6pb6h7f1Mw9NtGuM0aPG8IgIhcMrHhIv62kP2kstXvNZgnCYLHvkaMibVA34qKzke2wVIAzwqCix1zgZnw5mewPlD-6cdhtrO7PtBW4L1ug~ZN9xPm0C0nnRqsA-2-P33QkZWNUHybU4vbx4Em1p0ItQVNqsUevc8OFUGxTsF-DTCrFEx~Yg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Pen_based_Interaction","translated_slug":"","page_count":24,"language":"en","content_type":"Work","summary":"The use of pens in human-computer interaction has been investigated since Ivan Sutherland's Sketchpad graphical communication system in the early 1960s. We provide an overview of the major developments in pen-based interaction over the last six decades and compare different hardware solutions and pen-tracking techniques. In addition to pen-based interaction with digital devices, we discuss more recent digital pen and paper solutions where pen and paper-based interaction is augmented with digital information and services. We outline different interface and interaction styles and present various academic as well as commercial application domains where pen-based interaction has been successfully applied. 
Furthermore, we discuss several issues to be considered when designing pen-based interactions and conclude with an outlook of potential future challenges and directions for penbased human-computer interaction.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":120841760,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120841760/thumbnails/1.jpg","file_name":"signer_HCIhandbook2025.pdf","download_url":"https://www.academia.edu/attachments/120841760/download_file","bulk_download_file_name":"Pen_based_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120841760/signer_HCIhandbook2025-libre.pdf?1736963196=\u0026response-content-disposition=attachment%3B+filename%3DPen_based_Interaction.pdf\u0026Expires=1744203890\u0026Signature=X9RgsdYhBbZAipbbOyQH~gZhPyD6ZdK0CrdJ-kXG2b-aSpDS6pTrB3sGAmAcJLAXX-c-u2trydCc9ZyejF-LRRZnGW5~rtTAPs6UpWdgqvmewJnX1nWpAW1LGOe~guI8W9CH3PaE9mNxf5nG2DvgNdnxMmlhE2T4IV6pb6h7f1Mw9NtGuM0aPG8IgIhcMrHhIv62kP2kstXvNZgnCYLHvkaMibVA34qKzke2wVIAzwqCix1zgZnw5mewPlD-6cdhtrO7PtBW4L1ug~ZN9xPm0C0nnRqsA-2-P33QkZWNUHybU4vbx4Em1p0ItQVNqsUevc8OFUGxTsF-DTCrFEx~Yg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":14203,"name":"Natural User Interfaces","url":"https://www.academia.edu/Documents/in/Natural_User_Interfaces"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":126585,"name":"Human Computer Interaction Design","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction_Design"},{"id":469749,"name":"Human Computer Interface","url":"https://www.academia.edu/Documents/in/Human_Computer_Interface"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1226904,"name":"Computer Science and 
Engineering","url":"https://www.academia.edu/Documents/in/Computer_Science_and_Engineering-1"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":42685186,"url":"https://beatsigner.com/publications/pen-based-interaction.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-120601781-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="241739"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/241739/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse"><img alt="Research paper thumbnail of What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse" class="work-thumbnail" src="https://attachments.academia-assets.com/59300463/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/241739/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse">What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Many of today&#39;s digital document formats are strongly based on a digital emulation of printed med...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Many of today&#39;s digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, does not pay attention to some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. 
Towards a Framework for Dynamic Data Physicalisation

Beat Signer, Payam Ebrahimi, Timothy J. Curtin and Ahmed K.A. Abdullah, International Workshop Toward a Design Language for Data Physicalization, Berlin, Germany, October 2018

Advanced data visualisation techniques enable the exploration and analysis of large datasets. In the emerging field of data physicalisation, data is represented in physical space (e.g. via physical models) and can be explored not only visually but also through other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. The envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices, as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.

PDF: https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf
href="https://vub.academia.edu/PayamEbrahimi">Payam Ebrahimi</a>, and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/AhmedKareemAAbdullah">Ahmed K.A. Abdullah</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Rec...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user&#39;s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="004fb5eaa4f6acfa8e3b4f76cd783a1d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:59746352,&quot;asset_id&quot;:37336859,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/59746352/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="37336859"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="37336859"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 37336859; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=37336859]").text(description); $(".js-view-count[data-work-id=37336859]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 37336859; 
window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='37336859']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "004fb5eaa4f6acfa8e3b4f76cd783a1d" } } $('.js-work-strip[data-work-id=37336859]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":37336859,"title":"Towards a Framework for Dynamic Data Physicalisation","translated_title":"","metadata":{"abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","more_info":" Beat Signer, Payam Ebrahimi, Timothy J. Curtin and Ahmed K.A. Abdullah, International Workshop Toward a Design Language for Data Physicalization, Berlin, Germany, October 2018","publication_date":{"day":null,"month":null,"year":2018,"errors":{}}},"translated_abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. 
Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","internal_url":"https://www.academia.edu/37336859/Towards_a_Framework_for_Dynamic_Data_Physicalisation","translated_internal_url":"","created_at":"2018-09-03T13:04:25.315-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":31847683,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":3163292,"co_author_invite_id":null,"email":"p***m@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Payam Ebrahimi","title":"Towards a Framework for Dynamic Data Physicalisation"},{"id":31847684,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":62307900,"co_author_invite_id":null,"email":"t***n@vub.ac.be","display_order":2,"name":"Timothy Curtin","title":"Towards a Framework for Dynamic Data Physicalisation"},{"id":31871736,"work_id":37336859,"tagging_user_id":13155,"tagged_user_id":91228720,"co_author_invite_id":null,"email":"a***h@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":3,"name":"Ahmed K.A. Abdullah","title":"Towards a Framework for Dynamic Data Physicalisation"}],"downloadable_attachments":[{"id":59746352,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59746352/thumbnails/1.jpg","file_name":"signer_DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/59746352/download_file","bulk_download_file_name":"Towards_a_Framework_for_Dynamic_Data_Phy.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59746352/signer_DataPhys2018-libre.pdf?1560637089=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Framework_for_Dynamic_Data_Phy.pdf\u0026Expires=1744203890\u0026Signature=b12zZvVZvmqYV1PGe9xhwnT70V2Jxe1KCZWGCdIiQaEcwUrv7-LHMWXtR8O8Vagz3vZYVsJl1nnXYRrJsBCvdgkqXhMOEH8oOYTq1YgLSGZGI-gjpAqUJxHwuU1WY-lbXmsHHwZ-AjMe4WMAMQeHkpGpEC3LXzKnM-1ml8eJW79mPozzThZeGUDvjF8HVbhy8GoG4YBpQYfZil6aDxihzzpFKuiKSodAhI2Hm6S6yNSycPNQgV8xnCDLpZh2fGXx7i7ed12dUahy-oKTA~avIU6ekGIqYdt8iWb8D4a-aGXamBMTIDPngL0XeTBKs~9fljdj-EBZ8pq~IyRMVkdA3A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Towards_a_Framework_for_Dynamic_Data_Physicalisation","translated_slug":"","page_count":4,"language":"en","content_type":"Work","summary":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user's interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. 
Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions. ","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":59746352,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/59746352/thumbnails/1.jpg","file_name":"signer_DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/59746352/download_file","bulk_download_file_name":"Towards_a_Framework_for_Dynamic_Data_Phy.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/59746352/signer_DataPhys2018-libre.pdf?1560637089=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Framework_for_Dynamic_Data_Phy.pdf\u0026Expires=1744203890\u0026Signature=b12zZvVZvmqYV1PGe9xhwnT70V2Jxe1KCZWGCdIiQaEcwUrv7-LHMWXtR8O8Vagz3vZYVsJl1nnXYRrJsBCvdgkqXhMOEH8oOYTq1YgLSGZGI-gjpAqUJxHwuU1WY-lbXmsHHwZ-AjMe4WMAMQeHkpGpEC3LXzKnM-1ml8eJW79mPozzThZeGUDvjF8HVbhy8GoG4YBpQYfZil6aDxihzzpFKuiKSodAhI2Hm6S6yNSycPNQgV8xnCDLpZh2fGXx7i7ed12dUahy-oKTA~avIU6ekGIqYdt8iWb8D4a-aGXamBMTIDPngL0XeTBKs~9fljdj-EBZ8pq~IyRMVkdA3A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":4205,"name":"Data Analysis","url":"https://www.academia.edu/Documents/in/Data_Analysis"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":20470,"name":"Tangible User 
Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":126300,"name":"Big Data","url":"https://www.academia.edu/Documents/in/Big_Data"},{"id":140531,"name":"Framework","url":"https://www.academia.edu/Documents/in/Framework"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"}],"urls":[{"id":9196125,"url":"https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-37336859-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="122578719"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/122578719/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction"><img alt="Research paper thumbnail of As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction" class="work-thumbnail" src="https://attachments.academia-assets.com/117388797/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/122578719/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction">As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. 
profile--work_container" data-work-id="270763"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/270763/Interactive_Paper_Past_Present_and_Future"><img alt="Research paper thumbnail of Interactive Paper: Past, Present and Future" class="work-thumbnail" src="https://attachments.academia-assets.com/45228887/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/270763/Interactive_Paper_Past_Present_and_Future">Interactive Paper: Past, Present and Future</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Over the last few years, there has been a significant increase in the number of researchers deali...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. 
Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="0927aad0340b06e2978fe29a3b62271b" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:45228887,&quot;asset_id&quot;:270763,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/45228887/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="270763"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="270763"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 270763; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=270763]").text(description); $(".js-view-count[data-work-id=270763]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 270763; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='270763']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "0927aad0340b06e2978fe29a3b62271b" } } $('.js-work-strip[data-work-id=270763]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":270763,"title":"Interactive Paper: Past, Present and Future","translated_title":"","metadata":{"abstract":"Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. 
Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.","more_info":"Beat Signer and Moira C. Norrie, Proceedings of PaperComp 2010, 1st International Workshop on Paper Computing, Copenhagen Denmark, September 2010","ai_title_tag":"Interactive Paper: History and Future Directions","publication_date":{"day":null,"month":null,"year":2010,"errors":{}}},"translated_abstract":"Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.","internal_url":"https://www.academia.edu/270763/Interactive_Paper_Past_Present_and_Future","translated_internal_url":"","created_at":"2010-07-22T05:12:24.899-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":36173143,"work_id":270763,"tagging_user_id":13155,"tagged_user_id":120441,"co_author_invite_id":null,"email":"n***e@inf.ethz.ch","affiliation":"Swiss Federal Institute of Technology (ETH)","display_order":1,"name":"Moira Norrie","title":"Interactive Paper: Past, Present and Future"}],"downloadable_attachments":[{"id":45228887,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228887/thumbnails/1.jpg","file_name":"signer_PaperComp2010.pdf","download_url":"https://www.academia.edu/attachments/45228887/download_file","bulk_download_file_name":"Interactive_Paper_Past_Present_and_Futur.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228887/signer_PaperComp2010-libre.pdf?1462042110=\u0026response-content-disposition=attachment%3B+filename%3DInteractive_Paper_Past_Present_and_Futur.pdf\u0026Expires=1744203890\u0026Signature=NQceGH2NTHagxvnQYxiXdHRtAdSI~F8t5rb8iNs2dSPrmB8TUOOIPmy-CC3OEIbsX4VcsOh0idzN~WsMkR8Lsg~774OHIpMI84AqrxFraQhy40WmVmADzi1cxVtvgdgZ4m1qabxyEbtYuzeNAmSc5tE~Gq6p8iBC4VIWeNBsbnCG3itGCVVtqBbtPpdXjaoF95jC-FTh6EmrE3eePq2VuMgvX3dtZcsQE-yiSnsnBs6sdaydHNhDr3evNeXPBI3wzyvH~W5eqpacCamqS7lVkIyeGwg1Hk2eUb0haiUgWvcuAVrZ5eQ4lVHjbBNt5-K~ZI4qUAhw3i7Ak0SGnzSAbg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Interactive_Paper_Past_Present_and_Future","translated_slug":"","page_count":4,"language":"en","content_type":"Work","summary":"Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. 
While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":45228887,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228887/thumbnails/1.jpg","file_name":"signer_PaperComp2010.pdf","download_url":"https://www.academia.edu/attachments/45228887/download_file","bulk_download_file_name":"Interactive_Paper_Past_Present_and_Futur.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228887/signer_PaperComp2010-libre.pdf?1462042110=\u0026response-content-disposition=attachment%3B+filename%3DInteractive_Paper_Past_Present_and_Futur.pdf\u0026Expires=1744203890\u0026Signature=NQceGH2NTHagxvnQYxiXdHRtAdSI~F8t5rb8iNs2dSPrmB8TUOOIPmy-CC3OEIbsX4VcsOh0idzN~WsMkR8Lsg~774OHIpMI84AqrxFraQhy40WmVmADzi1cxVtvgdgZ4m1qabxyEbtYuzeNAmSc5tE~Gq6p8iBC4VIWeNBsbnCG3itGCVVtqBbtPpdXjaoF95jC-FTh6EmrE3eePq2VuMgvX3dtZcsQE-yiSnsnBs6sdaydHNhDr3evNeXPBI3wzyvH~W5eqpacCamqS7lVkIyeGwg1Hk2eUb0haiUgWvcuAVrZ5eQ4lVHjbBNt5-K~ZI4qUAhw3i7Ak0SGnzSAbg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":492,"name":"Management Information Systems","url":"https://www.academia.edu/Documents/in/Management_Information_Systems"},{"id":854,"name":"Computer Vision","url":"https://www.academia.edu/Documents/in/Computer_Vision"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital 
Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1236,"name":"Art","url":"https://www.academia.edu/Documents/in/Art"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1384,"name":"Web Engineering","url":"https://www.academia.edu/Documents/in/Web_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2852,"name":"Narrative","url":"https://www.academia.edu/Documents/in/Narrative"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3934,"name":"Visual Narrative","url":"https://www.academia.edu/Documents/in/Visual_Narrative"},{"id":4198,"name":"Mobile Technology","url":"https://www.academia.edu/Documents/in/Mobile_Technology"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":12650,"name":"Intelligent User Interface Agents","url":"https://www.academia.edu/Documents/in/Intelligent_User_Interface_Agents"},{"id":14203,"name":"Natural User Interfaces","url":"https://www.academia.edu/Documents/in/Natural_User_Interfaces"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":39369,"name":"Augmented Reality (Computer Science)","url":"https://www.academia.edu/Documents/in/Augmented_Reality_Computer_Science_"},{"id":41030,"name":"Mobile Augmented Reality","url":"https://www.academia.edu/Documents/in/Mobile_Augmented_Reality"},{"id":41474,"name":"Cross Media 
Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":51492,"name":"Pulp and Paper + Recycled Paper","url":"https://www.academia.edu/Documents/in/Pulp_and_Paper_Recycled_Paper"},{"id":58120,"name":"pulp and paper Technology","url":"https://www.academia.edu/Documents/in/pulp_and_paper_Technology"},{"id":69848,"name":"Presentation","url":"https://www.academia.edu/Documents/in/Presentation"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":182290,"name":"Tangible User Interface, Tangible Programming","url":"https://www.academia.edu/Documents/in/Tangible_User_Interface_Tangible_Programming"},{"id":227601,"name":"Intelligent User Interfaces","url":"https://www.academia.edu/Documents/in/Intelligent_User_Interfaces"},{"id":242420,"name":"Presentation Slides","url":"https://www.academia.edu/Documents/in/Presentation_Slides"},{"id":255094,"name":"Computer User Interface Design","url":"https://www.academia.edu/Documents/in/Computer_User_Interface_Design"},{"id":311668,"name":"Digital Story Telling","url":"https://www.academia.edu/Documents/in/Digital_Story_Telling"},{"id":369022,"name":"Cross Media","url":"https://www.academia.edu/Documents/in/Cross_Media"},{"id":407764,"name":"Visual Presentation","url":"https://www.academia.edu/Documents/in/Visual_Presentation"},{"id":470389,"name":"COMPUTER SCIENCE \u0026 ENGINEERING","url":"https://www.academia.edu/Documents/in/COMPUTER_SCIENCE_and_ENGINEERING-3"},{"id":531041,"name":"Augmented Reality Book","url":"https://www.academia.edu/Documents/in/Augmented_Reality_Book"},{"id":554420,"name":"PPT Presentation","url":"https://www.academia.edu/Documents/in/PPT_Presentation"},{"id":631659,"name":"Power Point Presentations","url":"https://www.academia.edu/Documents/in/Power_Point_Presentations"},{"id":721414,"name":"Augmented Paper","url":"https://www.academia.edu/Documents/in/Augmented_Paper"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"}],"urls":[{"id":9196127,"url":"https://beatsigner.com/publications/interactive-paper-past-present-and-future.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-270763-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="32057799"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/32057799/Tangible_Holograms_Towards_Mobile_Physical_Augmentation_of_Virtual_Objects"><img alt="Research paper thumbnail of Tangible Holograms: Towards Mobile Physical Augmentation of Virtual Objects" class="work-thumbnail" src="https://attachments.academia-assets.com/62885113/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/32057799/Tangible_Holograms_Towards_Mobile_Physical_Augmentation_of_Virtual_Objects">Tangible 
Tangible Holograms: Towards Mobile Physical Augmentation of Virtual Objects
Beat Signer and Timothy Curtin, Technical Report WISE Lab, WISE-2017-01, March 2017

The last two decades have seen the emergence and steady development of tangible user interfaces. While most of these interfaces are applied for input—with output still on traditional computer screens—the goal of programmable matter and actuated shape-changing materials is to directly use the physical objects for visual or tangible feedback. Advances in material sciences and flexible display technologies are investigated to enable such reconfigurable physical objects. While existing solutions aim for making physical objects more controllable via the digital world, we propose an approach where holograms (virtual objects) in a mixed reality environment are augmented with physical variables such as shape, texture or temperature. As such, the support for mobility forms an important contribution of the proposed solution since it enables users to freely move within and across environments. Furthermore, our augmented virtual objects can co-exist in a single environment with programmable matter and other actuated shape-changing solutions. The future potential of the proposed approach is illustrated in two usage scenarios and we hope that the presentation of our work in progress on a novel way to realise tangible holograms will foster some lively discussions in the CHI community.

PDF: https://beatsigner.com/publications/tangible-holograms-towards-mobile-physical-augmentation-of-virtual-objects.pdf
text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/50140273/Back_to_the_Future_Bringing_Original_Hypermedia_and_Cross_Media_Concepts_to_Modern_Desktop_Environments">Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a>, <a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/RobertvanBarlingen">Robert van Barlingen</a>, and <a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/ReinoutRoels">Reinout Roels</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Over the last few decades, we have seen massive improvements in computing power, but nevertheless...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user&#39;s data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents, and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolving. 
We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="e212459d23a8d6c746e89189b6108b0d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:68237788,&quot;asset_id&quot;:50140273,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/68237788/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="50140273"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="50140273"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 50140273; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=50140273]").text(description); $(".js-view-count[data-work-id=50140273]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 50140273; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='50140273']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "e212459d23a8d6c746e89189b6108b0d" } } $('.js-work-strip[data-work-id=50140273]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":50140273,"title":"Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments","translated_title":"","metadata":{"doi":"10.1145/3465336.3475122","abstract":"Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. 
This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user's data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents, and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolving. We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.","more_info":"Beat Signer, Reinout Roels, Robert Van Barlingen and Brent Willems, Proceedings of Hypertext 2021, 32nd ACM Conference on Hypertext and Social Media, Virtual Event, August 2021","ai_title_tag":"Revolutionizing Desktop Information Management","publication_date":{"day":null,"month":null,"year":2021,"errors":{}}},"translated_abstract":"Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user's data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents, and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolving. 
We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.","internal_url":"https://www.academia.edu/50140273/Back_to_the_Future_Bringing_Original_Hypermedia_and_Cross_Media_Concepts_to_Modern_Desktop_Environments","translated_internal_url":"","created_at":"2021-07-21T15:06:14.924-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":36722513,"work_id":50140273,"tagging_user_id":13155,"tagged_user_id":null,"co_author_invite_id":7272979,"email":"b***m@vub.be","display_order":1,"name":"Brent Willems","title":"Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments"},{"id":36790130,"work_id":50140273,"tagging_user_id":13155,"tagged_user_id":200968837,"co_author_invite_id":null,"email":"b***n@gmail.com","display_order":2,"name":"Robert van Barlingen","title":"Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments"},{"id":36722512,"work_id":50140273,"tagging_user_id":13155,"tagged_user_id":2181828,"co_author_invite_id":7272978,"email":"r***s@gmail.com","display_order":3,"name":"Reinout Roels","title":"Back to the Future: Bringing Original Hypermedia and Cross-Media Concepts to Modern Desktop Environments"}],"downloadable_attachments":[{"id":68237788,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/68237788/thumbnails/1.jpg","file_name":"Signer_Hypertext2021.pdf","download_url":"https://www.academia.edu/attachments/68237788/download_file","bulk_download_file_name":"Back_to_the_Future_Bringing_Original_Hyp.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/68237788/Signer_Hypertext2021-libre.pdf?1626905710=\u0026response-content-disposition=attachment%3B+filename%3DBack_to_the_Future_Bringing_Original_Hyp.pdf\u0026Expires=1744203890\u0026Signature=C76yqaupcwNsCY4c~KUoeS1hVbmkS2GHGs3ByfuwWHe0FllfJ8Z49KwpGmyTyebu6YwNKYqQFKDxT9a8OFKptbu1aG5KGLKxX9vvy9f5Ds7zbxc08bXmr~4idCXQvWWsKavNaNjiRSVbRwDF-uB7cWsC-aYb08lYxMPp3R6-t8FQoCNA5LF-X4j2aeRX~i-I0jLC4~haZNpBSx2WQCQ4amFdUUY33rE8fF-yLPJMmbmOhG0dvKDUor-SqJt3neGnB0QmiUnop4KtZV5VMKc~jlxVduxQMS-95AwxPjGkluprpg~36Fy0rYca-9nNo2hcQAVNUKZDuWcZ9G65tKcDDA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Back_to_the_Future_Bringing_Original_Hypermedia_and_Cross_Media_Concepts_to_Modern_Desktop_Environments","translated_slug":"","page_count":6,"language":"en","content_type":"Work","summary":"Over the last few decades, we have seen massive improvements in computing power, but nevertheless we still rely on digital documents and file systems that were originally created by mimicking the characteristics of physical storage media with all its limitations. This is quite surprising given that even before the existence of the computer, Information Science visionaries such as Vannevar Bush described more powerful information management solutions. We therefore aim to improve the way information is managed in modern desktop environments by embedding a hypermedia engine offering rich hypermedia and cross-media concepts at the level of an operating system. 
We discuss the resource-selector-link (RSL) hypermedia metamodel as a candidate for realising such a general hypermedia engine and highlight its flexibility based on a number of domain-specific applications that have been developed over the last two decades. The underlying content repository will no longer rely on monolithic files, but rather contain a user's data in the form of content fragments, such as snippets of text or images, which are structurally linked to form the corresponding documents, and can be reused in other documents or even shared across computers. By increasing the scope to a system-wide hypermedia engine, we have to deal with fundamental challenges related to granularity, interoperability or context resolving. We strongly believe that computing technology has evolved enough to revisit and address these challenges, laying the foundation for a wide range of innovative use cases for efficiently managing cross-media content in modern desktop environments.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":68237788,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/68237788/thumbnails/1.jpg","file_name":"Signer_Hypertext2021.pdf","download_url":"https://www.academia.edu/attachments/68237788/download_file","bulk_download_file_name":"Back_to_the_Future_Bringing_Original_Hyp.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/68237788/Signer_Hypertext2021-libre.pdf?1626905710=\u0026response-content-disposition=attachment%3B+filename%3DBack_to_the_Future_Bringing_Original_Hyp.pdf\u0026Expires=1744203890\u0026Signature=C76yqaupcwNsCY4c~KUoeS1hVbmkS2GHGs3ByfuwWHe0FllfJ8Z49KwpGmyTyebu6YwNKYqQFKDxT9a8OFKptbu1aG5KGLKxX9vvy9f5Ds7zbxc08bXmr~4idCXQvWWsKavNaNjiRSVbRwDF-uB7cWsC-aYb08lYxMPp3R6-t8FQoCNA5LF-X4j2aeRX~i-I0jLC4~haZNpBSx2WQCQ4amFdUUY33rE8fF-yLPJMmbmOhG0dvKDUor-SqJt3neGnB0QmiUnop4KtZV5VMKc~jlxVduxQMS-95AwxPjGkluprpg~36Fy0rYca-9nNo2hcQAVNUKZDuWcZ9G65tKcDDA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":284081,"name":"Vannevar Bush","url":"https://www.academia.edu/Documents/in/Vannevar_Bush"},{"id":627593,"name":"File management for operating 
systems","url":"https://www.academia.edu/Documents/in/File_management_for_operating_systems"},{"id":640377,"name":"Document Management","url":"https://www.academia.edu/Documents/in/Document_Management"},{"id":641270,"name":"Computer Information Systems","url":"https://www.academia.edu/Documents/in/Computer_Information_Systems"},{"id":974547,"name":"New Trends \u0026 Technologies of Information Science","url":"https://www.academia.edu/Documents/in/New_Trends_and_Technologies_of_Information_Science"},{"id":1258169,"name":"File Manager","url":"https://www.academia.edu/Documents/in/File_Manager"},{"id":1315262,"name":"Douglas Engelbart","url":"https://www.academia.edu/Documents/in/Douglas_Engelbart"},{"id":3745667,"name":"File management","url":"https://www.academia.edu/Documents/in/File_management"}],"urls":[{"id":10515097,"url":"https://beatsigner.com/publications/back-to-the-future-bringing-original-hypermedia-and-cross-media-concepts-to-modern-desktop-environments.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-50140273-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="44902025"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System"><img alt="Research paper thumbnail of OpenHPS: An Open Source Hybrid Positioning System" class="work-thumbnail" src="https://attachments.academia-assets.com/65488861/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System">OpenHPS: An Open Source Hybrid Positioning System</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Positioning systems and frameworks use various techniques to determine the position of an object....</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b5a637de390718fbeeaa78c96b61f2b6" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:65488861,&quot;asset_id&quot;:44902025,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/65488861/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="44902025"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="44902025"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 44902025; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=44902025]").text(description); $(".js-view-count[data-work-id=44902025]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 44902025; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='44902025']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b5a637de390718fbeeaa78c96b61f2b6" } } $('.js-work-strip[data-work-id=44902025]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":44902025,"title":"OpenHPS: An Open Source Hybrid Positioning System","translated_title":"","metadata":{"abstract":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. 
We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","more_info":"Technical Report WISE Lab, WISE-2020-01, December 2020","ai_title_tag":"OpenHPS: A Hybrid Open Source Positioning System","publication_date":{"day":null,"month":null,"year":2020,"errors":{}}},"translated_abstract":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","internal_url":"https://www.academia.edu/44902025/OpenHPS_An_Open_Source_Hybrid_Positioning_System","translated_internal_url":"","created_at":"2021-01-14T02:23:15.818-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":38902015,"work_id":44902025,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":-1,"name":"Maxim Van de Wynckel","title":"OpenHPS: An Open Source Hybrid Positioning System"}],"downloadable_attachments":[{"id":65488861,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/65488861/thumbnails/1.jpg","file_name":"OpenHPS.pdf","download_url":"https://www.academia.edu/attachments/65488861/download_file","bulk_download_file_name":"OpenHPS_An_Open_Source_Hybrid_Positionin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/65488861/OpenHPS-libre.pdf?1611352920=\u0026response-content-disposition=attachment%3B+filename%3DOpenHPS_An_Open_Source_Hybrid_Positionin.pdf\u0026Expires=1744203890\u0026Signature=G0AoN5OFeYvEQAE7EHjJX72tnmEWTrt7skE9GdN2FhwMZ1jaD1G7Qraa7Rq1OKv3zwiOLsJgIu8hfkicknarFwTSWYk-ar0Wk3qfmJE6rV36kZH1CaZ3K31ZK0gsVrBfX4-tzfsX1A3yKVXeihF~~u9GbECq3TefRrnm9cu7-fy7uTrV4LWFbDilOFg4YdCq19xfQsFlokuBsTOyM9dqYwbn6s43NO25TRHdQfeqwcsdsDdk1m~EA33weO0ZNjI05he1njpTuhYCBHV-RyVSA9UmkFlXmj28Ttec4H7m6CHZXNUa~fU8y0HIg4~1adcU2U1VQRxgTin-~VCpnGsaRA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"OpenHPS_An_Open_Source_Hybrid_Positioning_System","translated_slug":"","page_count":17,"language":"en","content_type":"Work","summary":"Positioning systems and frameworks use various techniques to determine the position of an object. Some of the existing solutions combine different sensory data at the time of positioning in order to compute more accurate positions by reducing the error introduced by the used individual positioning techniques. We present OpenHPS, a generic hybrid positioning system implemented in TypeScript, that can not only reduce the error during tracking by fusing different sensory data based on different algorithms, but also also make use of combined tracking techniques when calibrating or training the system. In addition to a detailed discussion of the architecture, features and implementation of the extensible open source OpenHPS framework, we illustrate the use of our solution in a demonstrator application fusing different positioning techniques. 
While OpenHPS offers a number of positioning techniques, future extensions might integrate new positioning methods or algorithms and support additional levels of abstraction including symbolic locations.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":65488861,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/65488861/thumbnails/1.jpg","file_name":"OpenHPS.pdf","download_url":"https://www.academia.edu/attachments/65488861/download_file","bulk_download_file_name":"OpenHPS_An_Open_Source_Hybrid_Positionin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/65488861/OpenHPS-libre.pdf?1611352920=\u0026response-content-disposition=attachment%3B+filename%3DOpenHPS_An_Open_Source_Hybrid_Positionin.pdf\u0026Expires=1744203890\u0026Signature=G0AoN5OFeYvEQAE7EHjJX72tnmEWTrt7skE9GdN2FhwMZ1jaD1G7Qraa7Rq1OKv3zwiOLsJgIu8hfkicknarFwTSWYk-ar0Wk3qfmJE6rV36kZH1CaZ3K31ZK0gsVrBfX4-tzfsX1A3yKVXeihF~~u9GbECq3TefRrnm9cu7-fy7uTrV4LWFbDilOFg4YdCq19xfQsFlokuBsTOyM9dqYwbn6s43NO25TRHdQfeqwcsdsDdk1m~EA33weO0ZNjI05he1njpTuhYCBHV-RyVSA9UmkFlXmj28Ttec4H7m6CHZXNUa~fU8y0HIg4~1adcU2U1VQRxgTin-~VCpnGsaRA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":4222,"name":"Hybrid Systems","url":"https://www.academia.edu/Documents/in/Hybrid_Systems"},{"id":4672,"name":"Open Source Software","url":"https://www.academia.edu/Documents/in/Open_Source_Software"},{"id":9988,"name":"Indoor Positioning","url":"https://www.academia.edu/Documents/in/Indoor_Positioning"},{"id":10907,"name":"Context-Aware Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":15158,"name":"Object Tracking (Computer Vision)","url":"https://www.academia.edu/Documents/in/Object_Tracking_Computer_Vision_"},{"id":53164,"name":"Context Awareness","url":"https://www.academia.edu/Documents/in/Context_Awareness"},{"id":83870,"name":"Ubiquitous Positioning","url":"https://www.academia.edu/Documents/in/Ubiquitous_Positioning"},{"id":90025,"name":"Tracking","url":"https://www.academia.edu/Documents/in/Tracking"},{"id":368857,"name":"Positioning","url":"https://www.academia.edu/Documents/in/Positioning"},{"id":491363,"name":"Vehicle 
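The high-level fusion mentioned in the abstract above can be illustrated in a few lines of code. This is a generic sketch and not the actual OpenHPS API: position estimates produced by different techniques are combined using inverse-variance weighting, so that more accurate estimates contribute more to the fused position.

    // Generic sketch (not the actual OpenHPS API) of high-level position
    // fusion: estimates from different techniques are combined, weighting
    // each one by the inverse of its squared accuracy (in metres).

    interface PositionEstimate {
      x: number;        // metres
      y: number;        // metres
      accuracy: number; // 1-sigma error estimate in metres (lower is better)
    }

    function fusePositions(estimates: PositionEstimate[]): PositionEstimate {
      let wx = 0, wy = 0, wSum = 0;
      for (const e of estimates) {
        const w = 1 / (e.accuracy * e.accuracy); // inverse-variance weight
        wx += w * e.x;
        wy += w * e.y;
        wSum += w;
      }
      return { x: wx / wSum, y: wy / wSum, accuracy: Math.sqrt(1 / wSum) };
    }

    // Example with made-up estimates from three techniques
    const fused = fusePositions([
      { x: 12.1, y: 4.0, accuracy: 3.0 }, // WLAN fingerprinting
      { x: 11.4, y: 4.6, accuracy: 2.0 }, // BLE multilateration
      { x: 11.8, y: 4.2, accuracy: 1.5 }, // pedestrian dead reckoning
    ]);
    console.log(fused); // fused accuracy is better than any single estimate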
Tracking","url":"https://www.academia.edu/Documents/in/Vehicle_Tracking"},{"id":1111705,"name":"Typescript","url":"https://www.academia.edu/Documents/in/Typescript"},{"id":3857627,"name":"OpenHPS","url":"https://www.academia.edu/Documents/in/OpenHPS"}],"urls":[{"id":9192959,"url":"https://beatsigner.com/publications/openhps-an-open-source-hybrid-positioning-system.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-44902025-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="82685780"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data"><img alt="Research paper thumbnail of A Solid-based Architecture for Decentralised Interoperable Location Data" class="work-thumbnail" src="https://attachments.academia-assets.com/88318267/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data">A Solid-based Architecture for Decentralised Interoperable Location Data</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">In today&#39;s technological world of privacy-conscious users, the tracking of individuals via differ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">In today&#39;s technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C&#39;s Solid platform specification. Using this specification, sensor data as well as an individual&#39;s location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="6159c21c543eb34d72d104c24e120d56" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:88318267,&quot;asset_id&quot;:82685780,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/88318267/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="82685780"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="82685780"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 82685780; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=82685780]").text(description); $(".js-view-count[data-work-id=82685780]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 82685780; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='82685780']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "6159c21c543eb34d72d104c24e120d56" } } $('.js-work-strip[data-work-id=82685780]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":82685780,"title":"A Solid-based Architecture for Decentralised Interoperable Location Data","translated_title":"","metadata":{"abstract":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. 
In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.","more_info":"Maxim Van de Wynckel and Beat Signer, Proceedings of IPIN 2022 (WiP), 12th International Conference on Indoor Positioning and Indoor Navigation, Beijing, China, September 2022","ai_title_tag":"Decentralised Interoperable Location Data with Solid Architecture","publication_date":{"day":null,"month":null,"year":2022,"errors":{}}},"translated_abstract":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.","internal_url":"https://www.academia.edu/82685780/A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data","translated_internal_url":"","created_at":"2022-07-06T02:53:47.154-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":38496798,"work_id":82685780,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Maxim Van de Wynckel","title":"A Solid-based Architecture for Decentralised Interoperable Location Data"}],"downloadable_attachments":[{"id":88318267,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/88318267/thumbnails/1.jpg","file_name":"van_de_wynckel_IPIN2022.pdf","download_url":"https://www.academia.edu/attachments/88318267/download_file","bulk_download_file_name":"A_Solid_based_Architecture_for_Decentral.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/88318267/van_de_wynckel_IPIN2022-libre.pdf?1657119020=\u0026response-content-disposition=attachment%3B+filename%3DA_Solid_based_Architecture_for_Decentral.pdf\u0026Expires=1744203890\u0026Signature=KHcf0DP8IW7jbNL7qFTbtSzvEHF7bIavN7jH41npkOk2bkUv8tfmrlZTkUO25m1zJzJRDD4jZ0mDZ-tGcHYOyyrzztvEuXXQiMBQkPh1ZmA7snilb9MneiIBRIWjZfBgsInYpyr01~BrtRvHZhbc-CXW0qHwHCU4mvRi~nl-PRp8Pzfpp3LgbAt0gRC1fQFqa8NHgL-L2QAaFOKyXk0hQWN4jrcGrvZq~o8K3ec25dAvDqGqMQD4ffPD~zFWWXfJ8lK~JQapDw4HHX~4mKNVwXSJxpX~si-LVR0OHxdfcdJCot0AhYP6Yr4VdFJqVRf3hLlK2culxNSKeeXQ0m1Ouw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"A_Solid_based_Architecture_for_Decentralised_Interoperable_Location_Data","translated_slug":"","page_count":15,"language":"en","content_type":"Work","summary":"In today's technological world of privacy-conscious users, the tracking of individuals via different positioning systems and services can be considered obtrusive. Furthermore, linking and integrating data from these positioning systems is not always possible or requires the major effort of creating new interfaces between systems. In this paper, we propose an architecture for the realisation of a decentralised positioning system based on the W3C's Solid platform specification. Using this specification, sensor data as well as an individual's location information is stored in secure decentralised data stores called Pods, that are hosted by user-selected Pod providers. We demonstrate that these Pods do not only offer transparent and interoperable data stores for persisting sensor data as well as processed location information, but also aid in linking multiple positioning systems for high-and low-level sensor fusion. For indoor positioning, this interoperability provides a way to offer users a single location-based service while also providing additional semantic context for other positioning systems to improve their data output. 
Developers of indoor positioning systems can store all data in a format that is readable, understandable and accessible by any other system that their users might be using, enabling collaboration between researchers and companies implementing these indoor positioning systems.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":88318267,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/88318267/thumbnails/1.jpg","file_name":"van_de_wynckel_IPIN2022.pdf","download_url":"https://www.academia.edu/attachments/88318267/download_file","bulk_download_file_name":"A_Solid_based_Architecture_for_Decentral.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/88318267/van_de_wynckel_IPIN2022-libre.pdf?1657119020=\u0026response-content-disposition=attachment%3B+filename%3DA_Solid_based_Architecture_for_Decentral.pdf\u0026Expires=1744203890\u0026Signature=KHcf0DP8IW7jbNL7qFTbtSzvEHF7bIavN7jH41npkOk2bkUv8tfmrlZTkUO25m1zJzJRDD4jZ0mDZ-tGcHYOyyrzztvEuXXQiMBQkPh1ZmA7snilb9MneiIBRIWjZfBgsInYpyr01~BrtRvHZhbc-CXW0qHwHCU4mvRi~nl-PRp8Pzfpp3LgbAt0gRC1fQFqa8NHgL-L2QAaFOKyXk0hQWN4jrcGrvZq~o8K3ec25dAvDqGqMQD4ffPD~zFWWXfJ8lK~JQapDw4HHX~4mKNVwXSJxpX~si-LVR0OHxdfcdJCot0AhYP6Yr4VdFJqVRf3hLlK2culxNSKeeXQ0m1Ouw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1495,"name":"Location Based Services","url":"https://www.academia.edu/Documents/in/Location_Based_Services"},{"id":4222,"name":"Hybrid Systems","url":"https://www.academia.edu/Documents/in/Hybrid_Systems"},{"id":9988,"name":"Indoor Positioning","url":"https://www.academia.edu/Documents/in/Indoor_Positioning"},{"id":86589,"name":"Outdoor Positioning","url":"https://www.academia.edu/Documents/in/Outdoor_Positioning"},{"id":368857,"name":"Positioning","url":"https://www.academia.edu/Documents/in/Positioning"},{"id":522700,"name":"Location-Based Service","url":"https://www.academia.edu/Documents/in/Location-Based_Service"},{"id":602903,"name":"Solid","url":"https://www.academia.edu/Documents/in/Solid"},{"id":3857627,"name":"OpenHPS","url":"https://www.academia.edu/Documents/in/OpenHPS"}],"urls":[{"id":29589462,"url":"https://beatsigner.com/publications/a-solid-based-architecture-for-decentralised-interoperable-location-data.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-82685780-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="50252266"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/50252266/Indoor_Positioning_Using_the_OpenHPS_Framework"><img alt="Research paper thumbnail of Indoor Positioning Using the OpenHPS Framework" class="work-thumbnail" src="https://attachments.academia-assets.com/68307205/thumbnails/1.jpg" /></a></div><div 
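Since Solid Pods expose their resources over plain HTTP, the decentralised storage described above needs no special client. The sketch below PUTs a single position observation as a Turtle document to a hypothetical Pod URL; the vocabulary choice is an assumption on my part, and the Solid-OIDC authentication that a real Pod would require is omitted for brevity.

    // Minimal sketch of writing a position observation to a Solid Pod.
    // Pods expose resources over HTTP, so a Turtle document can be stored
    // with a plain PUT. The Pod URL below is hypothetical and the required
    // Solid-OIDC authentication headers are omitted.

    const podResource = 'https://alice.example-pod.org/positions/2022-09-05.ttl';

    const turtle = `
    @prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    <#observation-1>
        geo:lat  "50.8467"^^xsd:double ;
        geo:long "4.3499"^^xsd:double .
    `;

    async function storeObservation(): Promise<void> {
      const response = await fetch(podResource, {
        method: 'PUT',
        headers: { 'Content-Type': 'text/turtle' },
        body: turtle,
      });
      if (!response.ok) {
        throw new Error(`Pod rejected observation: ${response.status}`);
      }
    }

    storeObservation().catch(console.error);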
class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/50252266/Indoor_Positioning_Using_the_OpenHPS_Framework">Indoor Positioning Using the OpenHPS Framework</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/MaximVandeWynckel">Maxim Van de Wynckel</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through d...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal lowas well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and online stage, and how it can be adapted for high accuracy or low latency. For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points, BLE beacons as well as several trajectories that include IMU data.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-50252266-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-50252266-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982157/figure-1-component-architecture-of-openhps-openhp-is-an-open"><img alt="Fig. 1: Component architecture of OpenHPS OpenHP3$? is an open source hybrid positioning framework implemented in TypeScript. The system is split into individual modules that provide extra functionality on top of a core component. The core component of the OpenHPS framework is a process network designed to sample sensor data to a position while other components extend this core function- ality with different data storage types, positioning techniques, abstractions and communication nodes. 
" class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982169/figure-3-pulling-data-in-the-process-network-shows-data"><img alt="Fig. 3: Pulling data in the process network Fig. 2 shows data being pushed by a source node and handled by two processing nodes. Once a downstream node is ready with the frame, it resolves the promise signalling to the upstream node that new data can be accepted. Sink nodes emit an event upstream, indicating that data has been persisted. " class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982204/figure-2-pushing-data-in-the-process-network-our-framework"><img alt="Fig. 2: Pushing data in the process network Our framework uses a push-pull-based stream for sampling sensor data based on existing stream-based software architec- tures. However, we optimised the framework for processing and handling positioning data. Source nodes that actively produce information, such as an IMU sensor, can push in- formation. Pull request actions trigger a push when a node is able to respond to the request. This behaviour is similar to Akka Streams [17], but unlike reactive streams our framework does not use the behaviour to implement back pressure in the system. Both the push and pull requests can be executed asynchronously, similar to reactive streams [18]. " class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982216/figure-4-data-frame-containing-objects-and-sensor-data"><img alt="Fig. 4: Data frame containing objects and sensor data " class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982230/figure-5-listing-location-based-service"><img alt="Listing 1: Location-based service " class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982243/figure-5-listing-graph-shape-for-pedestrian-dead-reckoning"><img alt="Listing 3: Graph shape for pedestrian dead reckoning The fingerprint service used to store fingerprints is shared with the online stage. For the scope of our evaluation, we used various positioning methods ranging from BLE multilateration and cell identification using 11 BLE beacons, WLAN finger- printing and BLE fingerprinting. A high-level position fusion node fuses the positions based on their accuracy [14]. Finally, the calculated position is sent back to the mobile application through the socket sink node (orange) as indicated in Fig. 5. The effectiveness of OpenHPS as a hybrid positioning solution is validated with two scenarios in Sections IV-B and IV-C. " class="figure-slide-image" src="https://figures.academia-assets.com/68307205/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/33982255/figure-5-positioning-model-for-server-offline-and-online"><img alt="Fig. 
Fig. 6: Fingerprinting dataset. For the evaluation of our positioning model, we created a fingerprinting dataset of a single floor in the building of our research lab [23]. A visual representation of our dataset is shown in Fig. 6. The dataset was recorded with a calibration application collecting information from WLAN access points, BLE beacons with a known position (blue) and an IMU sensor.

Listing 4: Positioning with fingerprinting parameters.

Fig. 7: Symbolic spaces in GeoJSON format. We use the symbolic space abstraction introduced in Section III-E to create symbolic spaces for the rooms, corridors, two lobbies and toilets. These symbolic spaces are used to determine the hit rate and are illustrated in Fig. 7, exported as GeoJSON polygonal features.
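A symbolic hit can be decided with a standard point-in-polygon test against such GeoJSON features. The sketch below is a simplified stand-in for the symbolic space abstraction, assuming local metric coordinates and a hypothetical room name.

    // Sketch of a symbolic hit test: a symbolic space is a GeoJSON polygonal
    // feature and a computed position "hits" the space when it lies inside
    // the polygon (ray-casting test). Coordinates are assumed to be local
    // metric coordinates; the room name is hypothetical.

    interface SymbolicSpace {
      type: 'Feature';
      properties: { name: string };
      geometry: { type: 'Polygon'; coordinates: number[][][] };
    }

    function contains(space: SymbolicSpace, x: number, y: number): boolean {
      const ring = space.geometry.coordinates[0]; // outer ring
      let inside = false;
      for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
        const [xi, yi] = ring[i];
        const [xj, yj] = ring[j];
        if (yi > y !== yj > y && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
          inside = !inside;
        }
      }
      return inside;
    }

    const office: SymbolicSpace = {
      type: 'Feature',
      properties: { name: 'Office A' },
      geometry: {
        type: 'Polygon',
        coordinates: [[[0, 0], [5, 0], [5, 4], [0, 4], [0, 0]]],
      },
    };

    console.log(contains(office, 2.5, 2.0)); // true: counts as a hit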
Fig. 8: Test trajectory with WLAN, BLE and IMU data. The application sends the WLAN and BLE data to the server, where it is processed similarly to the test data points in Section IV-B, while the IMU data is used locally in the application to perform pedestrian dead reckoning. Trajectory sensor information was collected by keeping the phone at chest height while walking the trajectory at a normal pace. Other than for the stationary points, the update frequency and accuracy are more important than the symbolic hit rate. The expected trajectory is shown in red in Fig. 8. We determined the error by comparing the last known position with the expected position in the trajectory. While WLAN positioning and BLE cell identification can show a visual representation of the complete route, the route only consists of 13 data points that are not synchronised with the user's real-time position. This delay is due to the scan duration and the processing time on the server.

Table I: Average, minimum and maximum X/Y position error compared to the fused position.

Table II: Sensor fusion comparison for the test trajectory. Table II shows the maximum and average error for our test trajectory with and without IMU data. The delay caused by the fingerprinting, in combination with the slow update frequency, leads to a larger error compared to the real-time position during the trajectory. Note that the flexibility of OpenHPS allows developers to experiment with different positioning algorithms and fusion techniques to further optimise the system.
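The error metrics reported in Tables I and II reduce to simple statistics over the Euclidean distance between estimated and expected positions. A small sketch with made-up sample points:

    // Sketch of the error metrics reported in Tables I and II: Euclidean
    // error between estimated and expected positions, aggregated into
    // average, minimum and maximum values. The sample data is made up.

    interface Point { x: number; y: number; }

    function errorStats(estimated: Point[], expected: Point[]) {
      const errors = estimated.map((p, i) =>
        Math.hypot(p.x - expected[i].x, p.y - expected[i].y)
      );
      return {
        avg: errors.reduce((a, b) => a + b, 0) / errors.length,
        min: Math.min(...errors),
        max: Math.max(...errors),
      };
    }

    console.log(errorStats(
      [{ x: 1.2, y: 0.9 }, { x: 4.5, y: 3.4 }], // estimated positions
      [{ x: 1.0, y: 1.0 }, { x: 5.0, y: 3.0 }], // expected trajectory points
    )); // { avg: ~0.43, min: ~0.22, max: ~0.64 }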
Topics: Computer Science, Information Technology, Informatics, Computer Engineering, Hybrid Systems, Indoor Positioning, Indoor Navigation, Indoor Localization, Positioning, Pedestrian Dead-Reckoning, Indoor Positioning System, Data Fingerprinting, OpenHPS

PDF: https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf
href="https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform"><img alt="Research paper thumbnail of MindXpres: An Extensible Content-driven Cross-Media Presentation Platform" class="work-thumbnail" src="https://attachments.academia-assets.com/45228932/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform">MindXpres: An Extensible Content-driven Cross-Media Presentation Platform</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/ReinoutRoels">Reinout Roels</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Existing presentation tools and document formats show a number of shortcomings in terms of the ma...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="139346b3c87b1f7ad687fcc57e552021" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:45228932,&quot;asset_id&quot;:7719770,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/45228932/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="7719770"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="7719770"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 7719770; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=7719770]").text(description); $(".js-view-count[data-work-id=7719770]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 7719770; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='7719770']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "139346b3c87b1f7ad687fcc57e552021" } } $('.js-work-strip[data-work-id=7719770]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":7719770,"title":"MindXpres: An Extensible Content-driven Cross-Media Presentation Platform","translated_title":"","metadata":{"doi":"10.1007/978-3-319-11746-1_16","abstract":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. 
We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","more_info":"Reinout Roels and Beat Signer, Proceedings of WISE 2014, 15th International Conference on Web Information System Engineering, Thessaloniki, Greece, October, 2014","publication_date":{"day":null,"month":null,"year":2014,"errors":{}}},"translated_abstract":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","internal_url":"https://www.academia.edu/7719770/MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform","translated_internal_url":"","created_at":"2014-07-20T00:01:24.356-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":21069,"work_id":7719770,"tagging_user_id":13155,"tagged_user_id":2181828,"co_author_invite_id":null,"email":"r***s@gmail.com","display_order":-1,"name":"Reinout Roels","title":"MindXpres: An Extensible Content-driven Cross-Media Presentation Platform"}],"downloadable_attachments":[{"id":45228932,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228932/thumbnails/1.jpg","file_name":"roels_WISE2014.pdf","download_url":"https://www.academia.edu/attachments/45228932/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Content_driven_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228932/roels_WISE2014-libre.pdf?1462042109=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Content_driven_C.pdf\u0026Expires=1744203891\u0026Signature=ejvNXBgZMy3S9RhpHCofxwu0vPfPdnTkxt2m4eL67lFAoP4cpphwVqjyk6256Qcz-R7uQTPfS9LlSLBu~2evhJ6~7QJ0rGG-f~Tz3KZ9tj-OxpMwBdQjcs--GyEDKheuqzLqWYEcl9RW7c36NZCqZQY-ZwaFFuPc6Vc1eZNuKEVKJkLH0nctKaUuMnjsMnOChDqHG1nS1ZSexOU370Fpi6cGpepJ88vB2b3Qwr9LX~01wLM79djP0nNRf5aDQSDtzoj2V9Igyoz9kakP7U2jGz0UDAsngb8AmgPkAgm066CkKBpR6XKoqtKC~x3jM5v43PX8aTlhFle5~3ImAZx5nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"MindXpres_An_Extensible_Content_driven_Cross_Media_Presentation_Platform","translated_slug":"","page_count":15,"language":"en","content_type":"Work","summary":"Existing presentation tools and document formats show a number of shortcomings in terms of the management, visualisation and navigation of rich cross-media content. While slideware was originally designed for the production of physical transparencies, there is an increasing need for richer and more interactive media types. We investigate innovative forms of organising, visualising and navigating presentations. This includes the introduction of a new document format supporting the integration or transclusion of content from different presentations and cross-media sources as well as the non-linear navigation of presentations. We present MindXpres, a web technology-based extensible platform for content-driven cross-media presentations. The modular architecture and plug-in mechanism of MindXpres enable the reuse or integration of new visualisation and interaction components. Our MindXpres prototype forms a platform for the exploration and rapid prototyping of innovative concepts for presentation tools. 
Its support for multi-device user interfaces further enables an active participation of the audience which should ultimately result in more dynamic, engaging presentations and improved knowledge transfer.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":45228932,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45228932/thumbnails/1.jpg","file_name":"roels_WISE2014.pdf","download_url":"https://www.academia.edu/attachments/45228932/download_file","bulk_download_file_name":"MindXpres_An_Extensible_Content_driven_C.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45228932/roels_WISE2014-libre.pdf?1462042109=\u0026response-content-disposition=attachment%3B+filename%3DMindXpres_An_Extensible_Content_driven_C.pdf\u0026Expires=1744203891\u0026Signature=ejvNXBgZMy3S9RhpHCofxwu0vPfPdnTkxt2m4eL67lFAoP4cpphwVqjyk6256Qcz-R7uQTPfS9LlSLBu~2evhJ6~7QJ0rGG-f~Tz3KZ9tj-OxpMwBdQjcs--GyEDKheuqzLqWYEcl9RW7c36NZCqZQY-ZwaFFuPc6Vc1eZNuKEVKJkLH0nctKaUuMnjsMnOChDqHG1nS1ZSexOU370Fpi6cGpepJ88vB2b3Qwr9LX~01wLM79djP0nNRf5aDQSDtzoj2V9Igyoz9kakP7U2jGz0UDAsngb8AmgPkAgm066CkKBpR6XKoqtKC~x3jM5v43PX8aTlhFle5~3ImAZx5nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3424,"name":"Information 
Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8130,"name":"Web Development","url":"https://www.academia.edu/Documents/in/Web_Development"},{"id":8679,"name":"Computer Supported Collaborative Learning (CSCL)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Collaborative_Learning_CSCL_"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":17951,"name":"Learning And Teaching In Higher Education","url":"https://www.academia.edu/Documents/in/Learning_And_Teaching_In_Higher_Education"},{"id":18711,"name":"Technology-mediated teaching and learning","url":"https://www.academia.edu/Documents/in/Technology-mediated_teaching_and_learning"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":25475,"name":"Learning environments","url":"https://www.academia.edu/Documents/in/Learning_environments"},{"id":25681,"name":"E-learning 2.0","url":"https://www.academia.edu/Documents/in/E-learning_2.0"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":37753,"name":"Teaching","url":"https://www.academia.edu/Documents/in/Teaching"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":168573,"name":"Lifelong learning and adult education","url":"https://www.academia.edu/Documents/in/Lifelong_learning_and_adult_education"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":369022,"name":"Cross Media","url":"https://www.academia.edu/Documents/in/Cross_Media"},{"id":651491,"name":"Web Information Systems","url":"https://www.academia.edu/Documents/in/Web_Information_Systems"},{"id":672167,"name":"Cross-Media Information Spaces and Architectures (CISA)","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces_and_Architectures_CISA_"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"},{"id":1705150,"name":"MindXpres","url":"https://www.academia.edu/Documents/in/MindXpres"}],"urls":[{"id":9196132,"url":"https://beatsigner.com/publications/mindxpres-an-extensible-content-driven-cross-media-presentation-platform.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); 
Computing Education Research as a Translational Transdiscipline
Evan Cole, Yoshi Malaise and Beat Signer. Proceedings of SIGCSE 2023, 54th ACM Technical Symposium on Computer Science Education, Toronto, Canada, March 2023. DOI: 10.1145/3545945.3569771

The field of Computing Education Research (CER) produces important insights into learning and notable interventions, yet due to the research/practice divide these do not have the desired impact on learners or practitioners. Even within CER, Computing Education (CE) learning theories have limited influence on learning designs due to the theory/design divide, which is unfortunate given that the goal of CER is to impact learners and broaden access to computation.

There is a lack of an overarching model defining CER as a unified field and providing a framework for discussion. While there is discussion around many of the core activities and practices in CER, we have yet to come across a holistic characterisation. We introduce a model of Translational Computing Education Research (TCER) that helps to understand and discuss CER as a field, bridge the divides and provide internal structure, while also making the field more approachable for interdisciplinary and non-academic collaborators. In our TCER model, theory and design are equally important but weighted differently depending on an activity's position along the research/practice continuum.

In addition to the future exploration and exploitation of the presented TCER model, we propose further characterising CER as a field, applying the TCER model to understand past and contemporary CER, applying the model to address current challenges in CER, imagining what the field can become, as well as exploring the potential for translational research programmes to maximise the broader impact of computing education research.

Figures: research/practice and theory/design continuum (Figure 2).

PDF: https://beatsigner.com/publications/computing-education-research-as-a-translational-transdiscipline.pdf
POSO: A Generic Positioning System Ontology
Maxim Van de Wynckel and Beat Signer. Proceedings of ISWC 2022, 21st International Semantic Web Conference, Hangzhou, China, October 2022. DOI: 10.1007/978-3-031-19433-7_14

While satellite-based positioning systems are mainly used in outdoor environments, various other positioning techniques exist for different domains and use cases, including indoor or underground settings. The representation of spatial data via semantic linked data is well addressed by existing spatial ontologies. However, these ontologies primarily focus on location data in its geographical context and lack solutions for describing the different types of data generated by a positioning system, as well as the sampling techniques used to obtain that data. In this paper we introduce a new generic Positioning System Ontology (POSO) that is built on top of the Semantic Sensor Network (SSN) and Sensor, Observation, Sample, and Actuator (SOSA) ontologies. With POSO, we provide the missing concepts needed for describing a positioning system and its output, with known positioning algorithms and techniques in mind. Thereby, we enable the improvement of hybrid positioning systems making use of multiple platforms and sensors that are described via the presented POSO ontology.

Figures: basic structure of a positioning system that tracks entities (Fig. 1); positioning systems and techniques in the POSO ontology (Fig. 2); example of a positioning system with a position, orientation and velocity property (Fig. 3).

PDF: https://beatsigner.com/publications/poso-a-generic-positioning-system-ontology.pdf
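To make the idea of describing positioning output on top of SOSA/SSN concrete, here is a hedged sketch using the N3.js library. The sosa: namespace is the standard W3C one, but the poso: namespace IRI and the PositioningSystem class used below are assumptions made for illustration; consult the POSO specification for the actual terms.

```typescript
// Sketch: describing the output of an indoor positioning system as linked
// data on top of SOSA, serialised with N3.js. POSO-specific names below are
// assumptions for illustration, not guaranteed to match the ontology.
import { Writer, DataFactory } from 'n3';
const { namedNode, literal } = DataFactory;

const SOSA = 'http://www.w3.org/ns/sosa/';
const POSO = 'http://purl.org/poso/';   // assumed namespace IRI
const EX = 'http://example.org/';
const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';
const XSD = 'http://www.w3.org/2001/XMLSchema#';

const writer = new Writer({ prefixes: { sosa: SOSA, poso: POSO, ex: EX } });

// The positioning system (a sosa:Sensor-like system) makes an observation
// of a tracked entity's position property.
writer.addQuad(namedNode(EX + 'bleSystem'), namedNode(RDF + 'type'),
  namedNode(POSO + 'PositioningSystem'));   // assumed POSO class
writer.addQuad(namedNode(EX + 'obs1'), namedNode(RDF + 'type'),
  namedNode(SOSA + 'Observation'));
writer.addQuad(namedNode(EX + 'obs1'), namedNode(SOSA + 'madeBySensor'),
  namedNode(EX + 'bleSystem'));
writer.addQuad(namedNode(EX + 'obs1'), namedNode(SOSA + 'observedProperty'),
  namedNode(EX + 'phonePosition'));
writer.addQuad(namedNode(EX + 'obs1'), namedNode(SOSA + 'resultTime'),
  literal(new Date().toISOString(), namedNode(XSD + 'dateTime')));

// Serialise the graph as Turtle.
writer.end((error, result) => console.log(result));
```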
alt="Research paper thumbnail of JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories" class="work-thumbnail" src="https://attachments.academia-assets.com/122230119/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128692393/JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories">JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">A main challenge faced by non-profit organisations providing computer science education to under-...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f0b8173166864564b855fcf9e41c34f0" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122230119,&quot;asset_id&quot;:128692393,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122230119/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128692393"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128692393"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128692393; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128692393]").text(description); $(".js-view-count[data-work-id=128692393]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128692393; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128692393']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f0b8173166864564b855fcf9e41c34f0" } } $('.js-work-strip[data-work-id=128692393]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":128692393,"title":"JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories","translated_title":"","metadata":{"doi":"10.48550/arXiv.2504.04006","abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. 
Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","more_info":"Inas Ghazouani Ghailani, Yoshi Malaise and Beat Signer, WISE-2025-02, arXiv preprint, April 2025","grobid_abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions. • Applied computing → Interactive learning environments; Distance learning.","publication_date":{"day":null,"month":null,"year":2025,"errors":{}},"grobid_abstract_attachment_id":122230119},"translated_abstract":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","internal_url":"https://www.academia.edu/128692393/JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories","translated_internal_url":"","created_at":"2025-04-07T21:45:23.049-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":43341471,"work_id":128692393,"tagging_user_id":13155,"tagged_user_id":230910613,"co_author_invite_id":null,"email":"y***e@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Yoshi Malaise","title":"JsStories: Improving Social Inclusion in Computer Science Education Through Interactive Stories"}],"downloadable_attachments":[{"id":122230119,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122230119/thumbnails/1.jpg","file_name":"ghailani_CoRR2025.pdf","download_url":"https://www.academia.edu/attachments/122230119/download_file","bulk_download_file_name":"JsStories_Improving_Social_Inclusion_in.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122230119/ghailani_CoRR2025-libre.pdf?1744091630=\u0026response-content-disposition=attachment%3B+filename%3DJsStories_Improving_Social_Inclusion_in.pdf\u0026Expires=1744203891\u0026Signature=c00SHmvX6T0z7gvAuOatRhO0heHKY8B1ZGGigMgKqMt0JybJdx17IBz8YzqsEgTAGFBDIjJl1~pXTb~dzwYlbDc2v9sERf9Hk7CFZyzR6unKUIyyaE-w~dEKxTLMDB3aNb46W9g563KI7nbIrciCkadkaJ4rQnbg0rfA0uQrlcBhuD1kcAqWLmdy6JTgDpadpbtIUET11VjR3S~1iDizdnufgOzwvXAnom4holu4zoAMBKGOKBC81t66eQPwko~meE2Wx-qosxG1gVAqHB0~AC2J~qvIVQLPqyh-TLK6grl9-6sd7zm4RR~686g1z4lFEijbmMHmAW8ABouT9ldqJg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"JsStories_Improving_Social_Inclusion_in_Computer_Science_Education_Through_Interactive_Stories","translated_slug":"","page_count":10,"language":"en","content_type":"Work","summary":"A main challenge faced by non-profit organisations providing computer science education to under-represented groups are the high drop-out rates. This issue arises from various factors affecting both students and teachers, such as the one-size-fits-all approach of many lessons. Enhancing social inclusion in the learning process could help reduce these drop-out rates. We present JsStories, a tool designed to help students learn JavaScript through interactive stories. The development of JsStories has been informed by existing literature on storytelling for inclusion and insights gained from a visit to HackYourFuture Belgium (HYFBE), a non-profit organisation that teaches web development to refugees and migrants. To lower barriers to entry and maximise the feeling of connection to the story, we incorporated narratives from HYFBE alumni. Further, we adhered to educational best practices by applying the PRIMM principles and offering level-appropriate content based on knowledge graphs. 
JsStories has been demonstrated, evaluated and communicated to the different stakeholders through interviews and a survey, enabling us to identify future directions for story-based learning solutions.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122230119,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122230119/thumbnails/1.jpg","file_name":"ghailani_CoRR2025.pdf","download_url":"https://www.academia.edu/attachments/122230119/download_file","bulk_download_file_name":"JsStories_Improving_Social_Inclusion_in.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122230119/ghailani_CoRR2025-libre.pdf?1744091630=\u0026response-content-disposition=attachment%3B+filename%3DJsStories_Improving_Social_Inclusion_in.pdf\u0026Expires=1744203891\u0026Signature=c00SHmvX6T0z7gvAuOatRhO0heHKY8B1ZGGigMgKqMt0JybJdx17IBz8YzqsEgTAGFBDIjJl1~pXTb~dzwYlbDc2v9sERf9Hk7CFZyzR6unKUIyyaE-w~dEKxTLMDB3aNb46W9g563KI7nbIrciCkadkaJ4rQnbg0rfA0uQrlcBhuD1kcAqWLmdy6JTgDpadpbtIUET11VjR3S~1iDizdnufgOzwvXAnom4holu4zoAMBKGOKBC81t66eQPwko~meE2Wx-qosxG1gVAqHB0~AC2J~qvIVQLPqyh-TLK6grl9-6sd7zm4RR~686g1z4lFEijbmMHmAW8ABouT9ldqJg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":6492,"name":"Storytelling","url":"https://www.academia.edu/Documents/in/Storytelling"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":15869,"name":"Online Learning","url":"https://www.academia.edu/Documents/in/Online_Learning"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":19707,"name":"Social Inclusion","url":"https://www.academia.edu/Documents/in/Social_Inclusion"},{"id":22412,"name":"Digital Storytelling","url":"https://www.academia.edu/Documents/in/Digital_Storytelling"},{"id":33915,"name":"Technology-enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology-enhanced_Learning"},{"id":43774,"name":"Learning","url":"https://www.academia.edu/Documents/in/Learning"},{"id":279114,"name":"Technology Enhanced Education","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Education"},{"id":1251579,"name":"Human-Computer 
Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":47448285,"url":"https://beatsigner.com/publications/jsstories-improving-social-inclusion-in-computer-science-education-through-interactive-stories.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-128692393-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="108934703"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/108934703/Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning"><img alt="Research paper thumbnail of Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning" class="work-thumbnail" src="https://attachments.academia-assets.com/107202372/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/108934703/Explorotron_An_IDE_Extension_for_Guided_and_Independent_Code_Exploration_and_Learning">Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">We introduce the Explorotron Visual Studio Code extension for guided and independent code explora...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. 
We further reflect on the question raised by Begel and Ko of whether technology should "structure learning for learners" or whether learners should "be taught how to structure their own independent learning" outside of the classroom.

Figures: (1) the Explorotron Visual Studio Code extension, with recommended study lenses on the left and, on the right, the Argument Picker study lens, in which students decide which argument goes where in the code; (2) an overview of the different ways students can study code; (3) an overview of the architecture that generates the recommended-lenses page.
$(".js-view-count[data-work-id=108934703]").text(description); $(".js-view-count[data-work-id=108934703]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 108934703; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='108934703']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "2a1c1546630e6361ad3731c3174e2b30" } } $('.js-work-strip[data-work-id=108934703]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":108934703,"title":"Explorotron: An IDE Extension for Guided and Independent Code Exploration and Learning","translated_title":"","metadata":{"doi":"10.1145/3631802.3631816","abstract":"We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. We situate Explorotron in the field of Computing Education Research (CER) and envision it to initiate a discussion around different topics, including how to balance the optimisation between the researcher-student-teacher trifecta that is inherent in CER, how to ethically and responsibly use large language models (LLMs) in the independent learning and exploration by students, and how to define better learning sessions over coding content that students obtained on their own. We further reflect on the question raised by Begel and Ko whether technology should \"structure learning for learners\" or whether learners should \"be taught how to structure their own independent learning\" outside of the classroom.","more_info":"Yoshi Malaise and Beat Signer, Proceedings of Koli Calling 2023, 23rd International Conference on Computing Education Research, Koli, Finland, November 2023","publication_date":{"day":null,"month":null,"year":2023,"errors":{}}},"translated_abstract":"We introduce the Explorotron Visual Studio Code extension for guided and independent code exploration and learning. Explorotron is a continuation of earlier work to explore how we can enable small organisations with limited resources to provide pedagogically sound learning experiences in programming. 
PDF: https://beatsigner.com/publications/explorotron-an-ide-extension-for-guided-and-independent-code-exploration-and-learning.pdf
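For readers unfamiliar with how such an IDE extension hooks into the editor, the following minimal Visual Studio Code extension sketch registers a command that lets a student pick a study lens; the command identifier and the lens names are illustrative and not Explorotron's actual code:

```typescript
// Minimal VS Code extension sketch (illustrative, not Explorotron itself):
// a command that offers the student a choice of study lenses for a file.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand(
    "studyLenses.pick", // hypothetical command identifier
    async () => {
      // Hypothetical lens names; the paper mentions an Argument Picker lens.
      const lens = await vscode.window.showQuickPick(
        ["Argument Picker", "Trace", "Annotate"],
        { placeHolder: "Choose a study lens for this file" }
      );
      if (lens) {
        vscode.window.showInformationMessage(`Opening ${lens} lens...`);
      }
    }
  );
  context.subscriptions.push(disposable);
}
```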
SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry

With the emergence of smart TVs, set-top boxes and public information screens over the last few years, there is an increasing demand to no longer use these appliances only for passive output. These devices can also be used for text-based web search as well as other tasks that require some form of text input. However, the design of text entry interfaces for efficient input on such appliances represents a major challenge. With current virtual keyboard solutions we only achieve an average text input rate of 5.79 words per minute (WPM), while the average typing speed on a traditional keyboard is 38 WPM. Furthermore, so-called controller-free appliances such as Samsung's Smart TV or Microsoft's Xbox Kinect result in even lower average text input rates. We present SpeeG2, a multimodal text entry solution combining speech recognition with gesture-based error correction. Four prototypes for efficient controller-free text entry have been developed and evaluated. A quantitative evaluation of our SpeeG2 text entry solution revealed that the best of the four prototypes achieves an average input rate of 21.04 WPM (without errors), outperforming current state-of-the-art solutions for controller-free text input.
A video about the system can be found at: https://www.academia.edu/11590838/SpeeG2

Figures: (1) SpeeG2 interaction; (2) highlighted parts of the grid layout; (4) correcting the word "fill" in spelling mode. The spelling mode complements the substitution, insertion and deletion corrections for words that are unlikely to be in the speech recognition vocabulary, such as people's names: a push-up gesture with the non-dominant hand switches the grid from word-based to character-based selection, users can disambiguate letters with phrases such as "a as in alpha", and the mode can also slightly modify a candidate word, for instance appending an "s" via the insertion feature. The mode is visualised with purple column borders, a single letter in each column and a special "*end*" block at the end of a word; (5) a user interacting with one of the SpeeG2 prototypes. We introduce four different prototypes which share a common grid interface but offer different forms of interaction for correcting speech recognition errors.
We evaluated different selection strategies in the setup shown in Figure 5 and observed whether accidental triggering is an issue in some of the prototypes.

Figures (continued): (6) the Scroller prototype, which uses design concepts similar to the Dasher interface: the user navigates towards the next word in a stepwise manner, and when the progress bar at the top is filled (i.e. fully green), a step occurs and the next word is put into the active column (0); the speed at which the progress bar fills is controlled by the horizontal movement of the dominant hand, filling faster the further the hand is from the body; (7) the Scroller Auto variation, in which the green progress bar is removed and the movement occurs continuously, the columns moving sideways similar to a 2D side-scrolling game; (9) a box plot of WPM per prototype.
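As a toy sketch of the hand-to-speed mapping described for the Scroller (the paper does not specify the actual function, so the linear mapping and the reach constant below are assumptions):

```typescript
// Assumed linear mapping from hand distance to progress-bar fill rate.
// The further the dominant hand is from the body, the faster the bar
// fills; when the bar is full, the grid advances by one word.
function progressRate(handDistanceM: number): number {
  const maxReach = 0.6; // assumed comfortable arm reach in metres
  const clamped = Math.min(Math.max(handDistanceM, 0), maxReach);
  return clamped / maxReach; // 0 (idle) .. 1 (fastest fill per tick)
}

console.log(progressRate(0.3)); // -> 0.5
```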
The mean WPM and WER for each prototype are highlighted in Table 1. The highest mean WPM was achieved in the speech-only test; however, since that test had no correction phase besides repeating a sentence, its WER should be read as the error score after correction. The WER of the other tests (Scroller, Scroller Auto, Typewriter and Typewriter Drag) is measured before correction. After correction, all SpeeG2 prototypes resulted in a WER of 0% for all participants.

Table 1: Average per-participant WPM and WER before correction (BC-WER) for each prototype, together with the corresponding standard deviation (SD).
Lode Hoste and Beat Signer, Proceedings of ICMI 2013, 15th International Conference on Multimodal Interaction, Sydney, Australia, December 2013. DOI: 10.1145/2522848.2522861
PDF: https://beatsigner.com/publications/speeg2-a-speech-and-gesture-based-interface-for-efficient-controller-free-text-entry.pdf
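The reported numbers rest on the two standard text-entry metrics, which can be sketched as follows under their usual definitions (words per minute counted in five-character words, word error rate as word-level edit distance over reference length; this is our sketch, not code from the paper):

```typescript
// Words per minute: transcribed characters, counted in five-character
// "words", divided by the elapsed time in minutes.
function wpm(transcribed: string, seconds: number): number {
  return transcribed.length / 5 / (seconds / 60);
}

// Word error rate: word-level Levenshtein distance between reference and
// recognised sentence, divided by the number of reference words.
function wer(reference: string, hypothesis: string): number {
  const r = reference.split(/\s+/);
  const h = hypothesis.split(/\s+/);
  // d[i][j] = edit distance between the first i reference words
  // and the first j hypothesis words.
  const d: number[][] = Array.from({ length: r.length + 1 }, (_, i) =>
    Array.from({ length: h.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= r.length; i++) {
    for (let j = 1; j <= h.length; j++) {
      const cost = r[i - 1] === h[j - 1] ? 0 : 1; // substitution
      d[i][j] = Math.min(
        d[i - 1][j] + 1, // deletion
        d[i][j - 1] + 1, // insertion
        d[i - 1][j - 1] + cost
      );
    }
  }
  return d[r.length][h.length] / r.length;
}

console.log(wpm("the quick brown fox", 10).toFixed(2)); // -> "22.80"
console.log(wer("the quick brown fox", "the quick crown fox")); // -> 0.25
```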
Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11984,"name":"Speech Recognition","url":"https://www.academia.edu/Documents/in/Speech_Recognition"},{"id":16009,"name":"Sonic Interaction Design","url":"https://www.academia.edu/Documents/in/Sonic_Interaction_Design"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":91387,"name":"Kinect","url":"https://www.academia.edu/Documents/in/Kinect"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":150569,"name":"Voice Recognition","url":"https://www.academia.edu/Documents/in/Voice_Recognition"},{"id":197254,"name":"Graphical User Interfaces","url":"https://www.academia.edu/Documents/in/Graphical_User_Interfaces"},{"id":310988,"name":"Microsoft Kinect Applications","url":"https://www.academia.edu/Documents/in/Microsoft_Kinect_Applications"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"},{"id":587803,"name":"Multimodal Human Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Human_Interaction"},{"id":1020985,"name":"Speech Interaction","url":"https://www.academia.edu/Documents/in/Speech_Interaction"}],"urls":[{"id":9196134,"url":"https://beatsigner.com/publications/speeg2-a-speech-and-gesture-based-interface-for-efficient-controller-free-text-entry.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-4685517-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="128127504"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128127504/A_Modular_and_Extensible_Hardware_Platform_Prototype_for_Dynamic_Data_Physicalisation"><img alt="Research paper thumbnail of A Modular and Extensible Hardware Platform Prototype for Dynamic Data Physicalisation" class="work-thumbnail" src="https://attachments.academia-assets.com/121756791/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128127504/A_Modular_and_Extensible_Hardware_Platform_Prototype_for_Dynamic_Data_Physicalisation">A Modular and Extensible Hardware Platform Prototype for Dynamic 
Data Physicalisation</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Dynamic data physicalisation is an emerging field of research, investigating the representation a...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Dynamic data physicalisation is an emerging field of research, investigating the representation and exploration of data via multiple modalities, extending beyond traditional visual methods. Despite the development of various data physicalisation applications in recent years, the integration of diverse hardware components remains both time-consuming and costly. Further, there is a lack of solutions for rapid prototyping and experimentation with different dynamic data physicalisation alternatives. To address this problem, we propose a modular and extensible hardware platform for dynamic data physicalisation. This platform introduces a communication architecture that ensures seamless plug-and-play functionality for modules representing different physical variables. We detail the implementation and technical evaluation of an initial prototype of our platform, demonstrating its potential to facilitate rapid prototyping and experimentation with various data physicalisation designs. This platform aims to support researchers and developers in the field by providing a versatile and efficient tool for the rapid prototyping and experimentation with different data physicalisation design alternatives.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="9b805fc78beb71fe35f2665df6b6f724" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:121756791,&quot;asset_id&quot;:128127504,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/121756791/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128127504"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128127504"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128127504; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128127504]").text(description); $(".js-view-count[data-work-id=128127504]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128127504; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128127504']"); 
Xuyao Zhang, Milan Ilić and Beat Signer, WISE-2025-01, arXiv preprint, March 2025. DOI: 10.48550/arXiv.2503.06583
CCS concepts: Human-centered computing → Interaction devices; Hardware → Sensors and actuators.
PDF: https://beatsigner.com/publications/a-modular-and-extensible-hardware-platform-prototype-for-dynamic-data-physicalisation.pdf
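The abstract does not spell out the message format, but a plug-and-play module protocol of the kind described, where a module announces the physical variable it renders and then receives normalised values, could look roughly like this sketch (all names and the message shapes are hypothetical, not the platform's actual architecture):

```typescript
// Hypothetical plug-and-play protocol sketch for physicalisation modules.
type PhysicalVariable = "position" | "vibration" | "temperature" | "colour";

// A module announces itself and the physical variable it renders...
interface HelloMessage { kind: "hello"; moduleId: string; variable: PhysicalVariable; }
// ...and then receives normalised data values (0..1) to render.
interface SetMessage { kind: "set"; moduleId: string; value: number; }

type Message = HelloMessage | SetMessage;

class Hub {
  private modules = new Map<string, PhysicalVariable>();

  handle(msg: Message): void {
    switch (msg.kind) {
      case "hello": // plug-and-play: register on arrival, no configuration
        this.modules.set(msg.moduleId, msg.variable);
        break;
      case "set":
        if (!this.modules.has(msg.moduleId)) throw new Error("unknown module");
        console.log(`render ${msg.value} on ${this.modules.get(msg.moduleId)}`);
        break;
    }
  }
}

const hub = new Hub();
hub.handle({ kind: "hello", moduleId: "m1", variable: "vibration" });
hub.handle({ kind: "set", moduleId: "m1", value: 0.7 }); // render 0.7 on vibration
```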
PaperPoint: A Paper-based Presentation and Interactive Paper Prototyping Tool

Recent developments in digital pen and paper solutions enable not only the digital capture of handwriting, but also paper to be used as an interactive medium that links to digital information and services. We present a tool that builds on technologies for interactive paper to enable PowerPoint presentations to be controlled from printed slide handouts. Furthermore, slides can easily be annotated during presentations by simply drawing on the printed version of the slide. As well as discussing the advantages of such a paper-based interface and initial findings on its use, we describe how we were also able to exploit it to provide a general prototyping tool for interactive paper applications.

Figure 1 (interactive paper framework): the framework is based on a client-server architecture. On the client side, a special input device, for example a digital pen, is used to detect (x,y) coordinates within an interactive paper document and send these to a computing device such as a regular PC or a PDA. In addition, the input device has to identify the document it is used on and the page number within this document.
Figure 2. PaperPoint printing process.

Figure 3. Interactive slide handouts. [...] controls which only allow the user to go one step forward or backward in their presentation. In addition, the 'Next' and 'Previous' buttons can be used to control the different steps within a PowerPoint animation. There is also a rectangular area with numbers ranging from 1 to 60, providing direct access to a specific slide.

Figure 4. Annotations on paper handouts.

Figure 5. Clipboard with paper controls. While it is convenient to annotate an interactive paper handout lying on a flat surface, it becomes much harder to write on the flexible paper sheets while walking around in a conference room or lecture theatre. Therefore, we have found that many PaperPoint users put their slides on a robust clipboard, as shown in Figure 5, enabling them to write on the slides while on the move. However, the clipboard is an optional tool and we have used different formats in different situations, for example individual paper sheets in meetings and cardboard in mobile environments.

Figure 6. Resulting digital slide.

Figure 7. The first step in implementing the PaperPoint application on top of the iPaper framework was the definition of the active paper regions to be linked to digital information or services. A printed overview slide is shown on the left-hand side of Figure 7, whereas the right-hand side shows the active areas, defined by rectangular shapes, that have to be stored in the iServer database. For each slide, we have an active area covering the whole slide region that is linked to the annotation functionality as described later. Other active areas are defined for the paper buttons used to access specific slides and control the presentation (e.g. 'Show', 'Next', 'Previous' etc.).
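Figure 7's caption describes active areas as rectangles stored in the iServer database, with a whole-slide annotation area underneath smaller button areas. A minimal sketch of resolving a pen position against such rectangles follows; the class and field names are illustrative assumptions rather than the actual iServer schema.

```python
# Illustrative sketch of resolving a pen position to an active paper region,
# as described in Figure 7; names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ActiveArea:
    """A rectangular region on a printed slide linked to a digital service."""
    x: float          # top-left corner
    y: float
    width: float
    height: float
    link: str         # e.g. 'show-slide-7', 'next', 'previous', 'annotate'

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

def resolve(areas: list[ActiveArea], px: float, py: float) -> str | None:
    """Return the link of the first area hit, if any."""
    for area in areas:
        if area.contains(px, py):
            return area.link
    return None

# A small 'Next' button on top of a whole-slide annotation area; more
# specific areas are listed first so they take precedence.
areas = [ActiveArea(180, 250, 40, 20, "next"),
         ActiveArea(0, 0, 300, 300, "annotate")]
print(resolve(areas, 190, 260))  # -> 'next'
print(resolve(areas, 50, 50))    # -> 'annotate'
```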
As mentioned earlier, our interactive paper framework is based on a general cross-media information platform that enables links between arbitrary digital or physical resources based on a resource plug-in mechanism. We now introduce active content, a form of resource that was developed for the iServer platform to support the design of complex interaction components as required, for example, by the PaperPoint application to communicate with PowerPoint. While regular [...]

Implementation. To support this customisation of paper-based interfaces, we provide special empty PaperPoint stickers, covered only with the Anoto pattern, that are linked to specific digital services. Figure 6 shows a sheet of PaperPoint stickers that are all linked to the command that shows a specific slide. Similar stickers are available for the 'Next', 'Previous' and all other commands. A user can design their own user interface by pasting the empty stickers on a surface, for example a blank paper sheet. In a second step, the paper interface may be decorated to highlight the functionality of the paper stickers.

Another application of the PaperPoint stickers in combination with PowerPoint is the early-stage prototyping of interactive paper applications without the need for any programming. Since some of our project partners have no Java programming skills, we were looking for a tool that would enable them to quickly build mockups of interactive paper interfaces. PowerPoint is a widely used application that allows for the integration of different types of media, including movies, sounds, links to web pages or screenshots of arbitrary applications, and requires no programming skills. A mockup of an interactive paper interface can easily be built by attaching the PaperPoint stickers to parts of different paper documents or other physical objects and then providing a mockup of the required "functionality" by adding the content to the PowerPoint slide linked to by the corresponding sticker.
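To make the distinction between regular resources and active content more tangible, here is a small sketch in which activating a regular resource returns passive content while activating active content triggers a command in another application; the class names and the string-based command encoding are hypothetical, not the actual iServer plug-in API.

```python
# Illustrative sketch of 'active content': a resource type whose activation
# executes application logic instead of returning static content.
class Resource:
    """Base class for anything a link can point to."""
    def activate(self) -> str:
        raise NotImplementedError

class ImageResource(Resource):
    """Regular, passive content that is simply displayed."""
    def __init__(self, url: str):
        self.url = url
    def activate(self) -> str:
        return f"display {self.url}"

class ActiveContent(Resource):
    """Activation triggers a command in another application."""
    def __init__(self, command: str, slide: int | None = None):
        self.command, self.slide = command, slide
    def activate(self) -> str:
        # In PaperPoint this step would call into PowerPoint's automation API.
        target = f" slide {self.slide}" if self.slide is not None else ""
        return f"powerpoint: {self.command}{target}"

for resource in (ImageResource("logo.png"), ActiveContent("show", 7),
                 ActiveContent("next")):
    print(resource.activate())
```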
" class="figure-slide-image" src="https://figures.academia-assets.com/45649418/figure_008.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-175439-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="4c96fb0eef7756e5a0e4cc982622b76d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:45649418,&quot;asset_id&quot;:175439,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/45649418/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="175439"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="175439"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 175439; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=175439]").text(description); $(".js-view-count[data-work-id=175439]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 175439; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='175439']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "4c96fb0eef7756e5a0e4cc982622b76d" } } $('.js-work-strip[data-work-id=175439]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":175439,"title":"PaperPoint: A Paper-based Presentation and Interactive Paper Prototyping Tool","translated_title":"","metadata":{"doi":"10.1145/1226969.1226981","abstract":"Recent developments in digital pen and paper solutions enable, not only the digital capture of handwriting, but also paper to be used as an interactive medium that links to digital information and services. 
We present a tool that builds on technologies for interactive paper to enable PowerPoint presentations to be controlled from printed slide handouts. Furthermore, slides can be easily annotated during presentations by simply drawing on the printed version of the slide. As well as discussing the advantages of such a paper-based interface and initial findings on its use, we describe how we were also able to exploit it to provide a general prototyping tool for interactive paper applications.","more_info":"Beat Signer and Moira C. Norrie, Proceedings of TEI 2007, First International Conference on Tangible and Embedded Interaction, Baton Rouge, USA, February 2007","ai_title_tag":"PaperPoint: Interactive Paper Presentation Tool","publication_date":{"day":null,"month":null,"year":2007,"errors":{}}},"translated_abstract":"Recent developments in digital pen and paper solutions enable, not only the digital capture of handwriting, but also paper to be used as an interactive medium that links to digital information and services. We present a tool that builds on technologies for interactive paper to enable PowerPoint presentations to be controlled from printed slide handouts. Furthermore, slides can be easily annotated during presentations by simply drawing on the printed version of the slide. As well as discussing the advantages of such a paper-based interface and initial findings on its use, we describe how we were also able to exploit it to provide a general prototyping tool for interactive paper applications.","internal_url":"https://www.academia.edu/175439/PaperPoint_A_Paper_based_Presentation_and_Interactive_Paper_Prototyping_Tool","translated_internal_url":"","created_at":"2009-03-16T15:53:01.866-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"paper","co_author_tags":[{"id":21088,"work_id":175439,"tagging_user_id":13155,"tagged_user_id":120441,"co_author_invite_id":null,"email":"n***e@inf.ethz.ch","affiliation":"Swiss Federal Institute of Technology (ETH)","display_order":null,"name":"Moira Norrie","title":"PaperPoint: A Paper-based Presentation and Interactive Paper Prototyping Tool"}],"downloadable_attachments":[{"id":45649418,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45649418/thumbnails/1.jpg","file_name":"signer_TEI2007.pdf","download_url":"https://www.academia.edu/attachments/45649418/download_file","bulk_download_file_name":"PaperPoint_A_Paper_based_Presentation_an.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45649418/signer_TEI2007-libre.pdf?1463348015=\u0026response-content-disposition=attachment%3B+filename%3DPaperPoint_A_Paper_based_Presentation_an.pdf\u0026Expires=1744203891\u0026Signature=YcxWlddUbpvTTkOC~WCvP93n2Q0xeLKV5iEgJpnfGjTyF4BNFqYNFhEYX1J73O8yhQJMTS4YFNnQqkB4t958no0OSUdyumZTvde3U9ZYt98aXrtisdKjMgp-znABmtKKt9bwg073tQOtVcMPWAccgza0rcEe5iSNTUwET8EdLBs0rgNN0JXyXpmVHfQsHe6ing6spCUqbDwCX6cJasLxInQ-1bhrlSyMGyJvVmFSBJGldhq6VfcyWpJZp0IPX5Wttl0CXv4cmMbHwdgFAL3ITnq-Ov1RICsD2G9LWoDsQ-ZuEuCOz5EvbcQcg87y2~U4DmA2FlRUL7MRh3cFaNYoUQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"PaperPoint_A_Paper_based_Presentation_and_Interactive_Paper_Prototyping_Tool","translated_slug":"","page_count":8,"language":"en","content_type":"Work","summary":"Recent developments in digital pen and paper solutions enable, not only the digital capture of handwriting, but also paper to be used as an interactive medium that links to digital 
information and services. We present a tool that builds on technologies for interactive paper to enable PowerPoint presentations to be controlled from printed slide handouts. Furthermore, slides can be easily annotated during presentations by simply drawing on the printed version of the slide. As well as discussing the advantages of such a paper-based interface and initial findings on its use, we describe how we were also able to exploit it to provide a general prototyping tool for interactive paper applications.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":45649418,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/45649418/thumbnails/1.jpg","file_name":"signer_TEI2007.pdf","download_url":"https://www.academia.edu/attachments/45649418/download_file","bulk_download_file_name":"PaperPoint_A_Paper_based_Presentation_an.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/45649418/signer_TEI2007-libre.pdf?1463348015=\u0026response-content-disposition=attachment%3B+filename%3DPaperPoint_A_Paper_based_Presentation_an.pdf\u0026Expires=1744203891\u0026Signature=YcxWlddUbpvTTkOC~WCvP93n2Q0xeLKV5iEgJpnfGjTyF4BNFqYNFhEYX1J73O8yhQJMTS4YFNnQqkB4t958no0OSUdyumZTvde3U9ZYt98aXrtisdKjMgp-znABmtKKt9bwg073tQOtVcMPWAccgza0rcEe5iSNTUwET8EdLBs0rgNN0JXyXpmVHfQsHe6ing6spCUqbDwCX6cJasLxInQ-1bhrlSyMGyJvVmFSBJGldhq6VfcyWpJZp0IPX5Wttl0CXv4cmMbHwdgFAL3ITnq-Ov1RICsD2G9LWoDsQ-ZuEuCOz5EvbcQcg87y2~U4DmA2FlRUL7MRh3cFaNYoUQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1601,"name":"Teacher Education","url":"https://www.academia.edu/Documents/in/Teacher_Education"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":1736,"name":"Science Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":1763,"name":"Mobile Learning","url":"https://www.academia.edu/Documents/in/Mobile_Learning"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital 
Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4198,"name":"Mobile Technology","url":"https://www.academia.edu/Documents/in/Mobile_Technology"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":7585,"name":"ICT in Education","url":"https://www.academia.edu/Documents/in/ICT_in_Education"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":8679,"name":"Computer Supported Collaborative Learning (CSCL)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Collaborative_Learning_CSCL_"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10414,"name":"ICT","url":"https://www.academia.edu/Documents/in/ICT"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":18711,"name":"Technology-mediated teaching and learning","url":"https://www.academia.edu/Documents/in/Technology-mediated_teaching_and_learning"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":37753,"name":"Teaching","url":"https://www.academia.edu/Documents/in/Teaching"},{"id":41030,"name":"Mobile Augmented Reality","url":"https://www.academia.edu/Documents/in/Mobile_Augmented_Reality"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":502875,"name":"Microsoft Powerpoint","url":"https://www.academia.edu/Documents/in/Microsoft_Powerpoint"},{"id":631659,"name":"Power Point Presentations","url":"https://www.academia.edu/Documents/in/Power_Point_Presentations"},{"id":710689,"name":"PowerPoint Presentation","url":"https://www.academia.edu/Documents/in/PowerPoint_Presentation"},{"id":721414,"name":"Augmented Paper","url":"https://www.academia.edu/Documents/in/Augmented_Paper"},{"id":951287,"name":"Power Point","url":"https://www.academia.edu/Documents/in/Power_Point"}],"urls":[{"id":9196136,"url":"https://beatsigner.com/publications/paperpoint-a-paper-based-presentation-and-interactive-paperp-prototyping-tool.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-175439-figures'); } }); </script> <div class="js-work-strip 
profile--work_container" data-work-id="38292382"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/38292382/A_Conceptual_Framework_and_Content_Model_for_Next_Generation_Presentation_Solutions"><img alt="Research paper thumbnail of A Conceptual Framework and Content Model for Next Generation Presentation Solutions" class="work-thumbnail" src="https://attachments.academia-assets.com/60026610/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/38292382/A_Conceptual_Framework_and_Content_Model_for_Next_Generation_Presentation_Solutions">A Conceptual Framework and Content Model for Next Generation Presentation Solutions</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://independent.academia.edu/ReinoutRoels">Reinout Roels</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a></span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Mainstream presentation tools such as Microsoft PowerPoint were originally built to mimic physica...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Mainstream presentation tools such as Microsoft PowerPoint were originally built to mimic physical media like photographic slides and still exhibit the same characteristics. However, the state of the art in presentation tools shows that more recent solutions start to go beyond the classic presentation paradigms. For instance, presentations are becoming increasingly non-linear, content is quickly evolving beyond simple text and images and the way we author our presentations is becoming more collaborative. Nevertheless, existing presentation content models are often based on assumptions that do not apply to the current state of presentations any more, making them incompatible for some use cases and limiting the potential of end-user presentation solutions. In order to support state-of-the-art presentation functionality, we rethink the concept of a presentation and introduce a conceptual framework for presentation content. We then present a new content model for presentation solutions based on the Resource-Selector-Link (RSL) hypermedia metamodel. We further discuss an implementation of our model and show some example use cases. 
We conclude by outlining how design choices in the model address currently unmet needs with regards to extensibility, content reuse, collaboration, semantics, user access management, non-linearity, and context awareness, resulting in better support for the corresponding end-user functionality in presentation tools.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ead46aa3451f434ab52e27e555735ef4" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:60026610,&quot;asset_id&quot;:38292382,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/60026610/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="38292382"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="38292382"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 38292382; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=38292382]").text(description); $(".js-view-count[data-work-id=38292382]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 38292382; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='38292382']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ead46aa3451f434ab52e27e555735ef4" } } $('.js-work-strip[data-work-id=38292382]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":38292382,"title":"A Conceptual Framework and Content Model for Next Generation Presentation Solutions","translated_title":"","metadata":{"doi":"10.1145/3331149","volume":"3","abstract":"Mainstream presentation tools such as Microsoft PowerPoint were originally built to mimic physical media like photographic slides and still exhibit the same characteristics. 
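As background for the model discussed above, the following sketch illustrates a simplified reading of the core RSL concepts: resources, selectors addressing parts of resources, and links connecting arbitrary entities. The class names and fields are illustrative assumptions, not the paper's exact metamodel definitions.

```python
# Illustrative sketch of the core Resource-Selector-Link (RSL) concepts;
# a simplified reading, not the paper's exact metamodel.
from dataclasses import dataclass, field

class Entity:
    """Common superclass: links may connect any entities, including links."""

@dataclass
class Resource(Entity):
    name: str                     # e.g. a slide, an image or a video

@dataclass
class Selector(Entity):
    resource: Resource            # the resource this selector refers to
    part: str                     # e.g. a region, text range or time span

@dataclass
class Link(Entity):
    sources: list[Entity] = field(default_factory=list)
    targets: list[Entity] = field(default_factory=list)

# A presentation slide reusing only a region of an existing image resource,
# separating reusable content from the structure that embeds it.
image = Resource("architecture-diagram.png")
region = Selector(image, "x=10,y=20,w=200,h=100")
slide = Resource("slide-3")
embed = Link(sources=[slide], targets=[region])
```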
Talks

From Hypertext to Cross-Media Information Spaces
Invited presentation given to students at Pandit Deendayal Energy University, 2023.
Slides: https://speakerdeck.com/signer/from-hypertext-to-cross-media-information-spaces
The RSL Hypermedia Metamodel and Its Application in Cross-Media Solutions
Guest lecture in the 'Hypertext and Hypernarratives' seminar organised by Prof. Claus Atzenbeck at Hof University, online, June 2023.
Bridging the Gap: Managing and Interacting with Information Across Media Boundaries
Talk given at the PIM Seminar, 2023.

The talk discusses the challenges associated with traditional digital documents and presentation formats like slideware, highlighting their limitations in cross-media information management.
It proposes a system-wide hypermedia engine designed to facilitate seamless interaction and navigation across different applications and media types, promoting a flexible and data-driven storytelling approach. The research emphasizes the need for new frameworks that separate content from structure, enabling non-linear storytelling and richer media integration.","ai_title_tag":"Enhancing Cross-Media Information Sharing"},"translated_abstract":null,"internal_url":"https://www.academia.edu/97421819/Bridging_the_Gap_Managing_and_Interacting_with_Information_Across_Media_Boundaries","translated_internal_url":"","created_at":"2023-02-23T13:18:59.549-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":99048390,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/99048390/thumbnails/1.jpg","file_name":"PIM_Seminar_2023.pdf","download_url":"https://www.academia.edu/attachments/99048390/download_file","bulk_download_file_name":"Bridging_the_Gap_Managing_and_Interactin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/99048390/PIM_Seminar_2023-libre.pdf?1677187520=\u0026response-content-disposition=attachment%3B+filename%3DBridging_the_Gap_Managing_and_Interactin.pdf\u0026Expires=1744203891\u0026Signature=NfYitZRTgQ8ikAK76jrFfNJPa~x6gwraO9pRCWXHZXGEeJPnwiipl8TZZpIqDmAe1Pnr7pB-YBeDxkzLjf2Y3MoOsvKl9Z-C08aRDR42fEcDD9T4CuobgnrTSmU-dCKM5SKOktPom1~akjCrrkgfstiRa-bZtFwWAMsOhswOm0x91P0TDXDrsN3VTz-LIyWh3jcwftFj38sz-Rsd6~otzKpAbi6Dv1j8eYsZyF4D~FNpcVGZCxhbXZnszmd~BTkTR~lujAqEQI239g94DNOMb94gepWH8L2iuupsTiJgTNJIiPIE7krtE8i8sxJiHOTh4kpjzkPFF9eK0wjiSZGRBA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Bridging_the_Gap_Managing_and_Interacting_with_Information_Across_Media_Boundaries","translated_slug":"","page_count":40,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":99048390,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/99048390/thumbnails/1.jpg","file_name":"PIM_Seminar_2023.pdf","download_url":"https://www.academia.edu/attachments/99048390/download_file","bulk_download_file_name":"Bridging_the_Gap_Managing_and_Interactin.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/99048390/PIM_Seminar_2023-libre.pdf?1677187520=\u0026response-content-disposition=attachment%3B+filename%3DBridging_the_Gap_Managing_and_Interactin.pdf\u0026Expires=1744203891\u0026Signature=NfYitZRTgQ8ikAK76jrFfNJPa~x6gwraO9pRCWXHZXGEeJPnwiipl8TZZpIqDmAe1Pnr7pB-YBeDxkzLjf2Y3MoOsvKl9Z-C08aRDR42fEcDD9T4CuobgnrTSmU-dCKM5SKOktPom1~akjCrrkgfstiRa-bZtFwWAMsOhswOm0x91P0TDXDrsN3VTz-LIyWh3jcwftFj38sz-Rsd6~otzKpAbi6Dv1j8eYsZyF4D~FNpcVGZCxhbXZnszmd~BTkTR~lujAqEQI239g94DNOMb94gepWH8L2iuupsTiJgTNJIiPIE7krtE8i8sxJiHOTh4kpjzkPFF9eK0wjiSZGRBA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer 
Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-97421819-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="81082149"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/81082149/Cross_Media_Technologies_and_Applications_Future_Directions_for_Personal_Information_Management"><img alt="Research paper thumbnail of Cross-Media Technologies and Applications: Future Directions for Personal Information Management" class="work-thumbnail" src="https://attachments.academia-assets.com/87249939/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/81082149/Cross_Media_Technologies_and_Applications_Future_Directions_for_Personal_Information_Management">Cross-Media Technologies and Applications: Future Directions for Personal Information Management</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Webinar given at icity Lab Talks - The Digital Value Chain In this talk, I will first provide ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Webinar given at icity Lab Talks - The Digital Value Chain <br /> <br />In this talk, I will first provide an overview of the lab’s research on a general data-driven approach for cross-media information system and architectures based on the resource-selector-link (RSL) hypermedia metamodel. We will then have a look at several cross-media applications for personal information management and next-generation presentation solutions (MindXpres). 
Finally, I will outline the lab’s most recent research on tangible interaction and dynamic data physicalisation.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="8d5608f3a259889c4a3bca558fe0422e" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:87249939,&quot;asset_id&quot;:81082149,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/87249939/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="81082149"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="81082149"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 81082149; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=81082149]").text(description); $(".js-view-count[data-work-id=81082149]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 81082149; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='81082149']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "8d5608f3a259889c4a3bca558fe0422e" } } $('.js-work-strip[data-work-id=81082149]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":81082149,"title":"Cross-Media Technologies and Applications: Future Directions for Personal Information Management","translated_title":"","metadata":{"abstract":"Webinar given at icity Lab Talks - The Digital Value Chain\r\n\r\nIn this talk, I will first provide an overview of the lab’s research on a general data-driven approach for cross-media information system and architectures based on the resource-selector-link (RSL) hypermedia metamodel. We will then have a look at several cross-media applications for personal information management and next-generation presentation solutions (MindXpres). 
Finally, I will outline the lab’s most recent research on tangible interaction and dynamic data physicalisation.","publication_date":{"day":null,"month":null,"year":2022,"errors":{}}},"translated_abstract":"Webinar given at icity Lab Talks - The Digital Value Chain\r\n\r\nIn this talk, I will first provide an overview of the lab’s research on a general data-driven approach for cross-media information system and architectures based on the resource-selector-link (RSL) hypermedia metamodel. We will then have a look at several cross-media applications for personal information management and next-generation presentation solutions (MindXpres). Finally, I will outline the lab’s most recent research on tangible interaction and dynamic data physicalisation.","internal_url":"https://www.academia.edu/81082149/Cross_Media_Technologies_and_Applications_Future_Directions_for_Personal_Information_Management","translated_internal_url":"","created_at":"2022-06-09T04:33:33.218-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":87249939,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/87249939/thumbnails/1.jpg","file_name":"iCity.pdf","download_url":"https://www.academia.edu/attachments/87249939/download_file","bulk_download_file_name":"Cross_Media_Technologies_and_Application.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/87249939/iCity-libre.pdf?1654774484=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Technologies_and_Application.pdf\u0026Expires=1744203891\u0026Signature=flfuDe0eJ8tQ-6tAv3tvMGxxQcOcDR5KOOedMd2mxhZnTWfq2V1qf12WvCB3k~tMC5bOPoovuWXXGobpnA4z~OQtieDUGacXsJH-p46Gi8ldvIpVxEJ~EJbBaoQKLZNyucB0~KOBtRjBUpmYPZFYscHKhPi3FuETDsMtj-Vv~omthQunjSqNhAq02odixKJXdLI61Rq4QGp6bCbInBT79dzm60QHQgH5DgX-TnDu6CMwfF2jy4Q49yIqQQTavjJUVMkPDam8RishScJpu3W4Da3tcTqgoplPpAiWXnmhl3BGYATwQDbQg3OowWJQ-ITdwL~RkZNeccTHPo69ozkLsQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Cross_Media_Technologies_and_Applications_Future_Directions_for_Personal_Information_Management","translated_slug":"","page_count":27,"language":"en","content_type":"Work","summary":"Webinar given at icity Lab Talks - The Digital Value Chain\r\n\r\nIn this talk, I will first provide an overview of the lab’s research on a general data-driven approach for cross-media information system and architectures based on the resource-selector-link (RSL) hypermedia metamodel. We will then have a look at several cross-media applications for personal information management and next-generation presentation solutions (MindXpres). 
Finally, I will outline the lab’s most recent research on tangible interaction and dynamic data physicalisation.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":87249939,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/87249939/thumbnails/1.jpg","file_name":"iCity.pdf","download_url":"https://www.academia.edu/attachments/87249939/download_file","bulk_download_file_name":"Cross_Media_Technologies_and_Application.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/87249939/iCity-libre.pdf?1654774484=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Technologies_and_Application.pdf\u0026Expires=1744203891\u0026Signature=flfuDe0eJ8tQ-6tAv3tvMGxxQcOcDR5KOOedMd2mxhZnTWfq2V1qf12WvCB3k~tMC5bOPoovuWXXGobpnA4z~OQtieDUGacXsJH-p46Gi8ldvIpVxEJ~EJbBaoQKLZNyucB0~KOBtRjBUpmYPZFYscHKhPi3FuETDsMtj-Vv~omthQunjSqNhAq02odixKJXdLI61Rq4QGp6bCbInBT79dzm60QHQgH5DgX-TnDu6CMwfF2jy4Q49yIqQQTavjJUVMkPDam8RishScJpu3W4Da3tcTqgoplPpAiWXnmhl3BGYATwQDbQg3OowWJQ-ITdwL~RkZNeccTHPo69ozkLsQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-81082149-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="9649249"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/9649249/From_PaperPoint_to_MindXpres_Towards_Enhanced_Presentation_Tools"><img alt="Research paper thumbnail of From PaperPoint to MindXpres - Towards Enhanced Presentation Tools" class="work-thumbnail" src="https://attachments.academia-assets.com/77357987/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/9649249/From_PaperPoint_to_MindXpres_Towards_Enhanced_Presentation_Tools">From PaperPoint to MindXpres - Towards Enhanced Presentation Tools</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f111b332c99982064c56b4a79210767b" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77357987,&quot;asset_id&quot;:9649249,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77357987/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="9649249"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item 
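
The resource-selector-link (RSL) metamodel named in the abstract above structures a cross-media information space around three core concepts: resources (pieces of media), selectors (addressing parts of resources) and links between them, where resources, selectors and links all share a common entity supertype, so links can themselves be linked. As a rough sketch only, with TypeScript type and field names that are my own assumptions rather than the RSL definitions:

// Illustrative sketch of the three RSL core concepts; all names are assumed.

interface Entity {                    // common supertype of all RSL concepts
  id: string;
}

interface Resource extends Entity {   // a piece of media (text, image, ...)
  uri: string;
}

interface Selector extends Entity {   // addresses part of a single resource
  resource: Resource;
  range: { start: number; end: number }; // media-specific, e.g. a text span
}

interface Link extends Entity {       // connects arbitrary sets of entities;
  sources: Entity[];                  // since a Link is itself an Entity,
  targets: Entity[];                  // links can be linked again
}

// Example: link a text snippet in one document to a whole image resource.
const doc: Resource = { id: "r1", uri: "file:///notes.txt" };
const img: Resource = { id: "r2", uri: "https://example.org/figure.png" };
const snippet: Selector = { id: "s1", resource: doc, range: { start: 10, end: 42 } };
const figureLink: Link = { id: "l1", sources: [snippet], targets: [img] };

Sharing the Entity supertype is what lets a single link mechanism span media boundaries: any addressable thing, including another link, can act as a link source or target.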
wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="9649249"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 9649249; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=9649249]").text(description); $(".js-view-count[data-work-id=9649249]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 9649249; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='9649249']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f111b332c99982064c56b4a79210767b" } } $('.js-work-strip[data-work-id=9649249]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":9649249,"title":"From PaperPoint to MindXpres - Towards Enhanced Presentation Tools","translated_title":"","metadata":{"time":{"end_hour":19,"start_hour":18,"errors":{}},"location":"Moonbeat, Mechelen, Belgium","event_date":{"day":4,"month":12,"year":2014,"errors":{}},"ai_abstract":"This research discusses the evolution of presentation tools from traditional platforms like PowerPoint to innovative systems such as PaperPoint and MindXpres. PaperPoint enhances mobile presentations with non-linear navigation and audience interactivity, while MindXpres serves as a prototyping platform that supports dynamic, content-driven presentations through an extensible framework. 
Future directions include collaborative features, integrated multimedia, and improved audience engagement.","ai_title_tag":"Evolving Presentation Tools: PaperPoint to MindXpres","organization":"Vrije Universiteit Brussel"},"translated_abstract":null,"internal_url":"https://www.academia.edu/9649249/From_PaperPoint_to_MindXpres_Towards_Enhanced_Presentation_Tools","translated_internal_url":"","created_at":"2014-12-06T09:48:22.142-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77357987,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77357987/thumbnails/1.jpg","file_name":"PresentationTools.pdf","download_url":"https://www.academia.edu/attachments/77357987/download_file","bulk_download_file_name":"From_PaperPoint_to_MindXpres_Towards_Enh.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77357987/PresentationTools-libre.pdf?1640509034=\u0026response-content-disposition=attachment%3B+filename%3DFrom_PaperPoint_to_MindXpres_Towards_Enh.pdf\u0026Expires=1744203891\u0026Signature=b9IudiuN1F1OpIk3lbPoZ7H61oth5gIjmRGJ2FsfEQ5DeMjassD0KVV670vi9tjx4xflQpsyK84n8FN~UC5Clc33YbjB97b4bJHICxMDMbXsKTPGff~NrN9jLcXRR4sE80pRn09EG52TAyDhewEYMO7avBdt2nJF9SIzbCJLJX1LufUoEYhOcKWHcX1qdxXIuR7KmVaDq8zGDYy2xcmRHOA5sXWVGn~QhFjnaRSJMgwydB~~9OtEMrwcpmJHQlI7EMwAXOTFcStQtm6OHKx3MBnzBis2soRh21Ssz6-HPAeKVtoF0BGDHEXuaw4N1lxvq0u6QAsbfetkwg3EM7bNXA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"From_PaperPoint_to_MindXpres_Towards_Enhanced_Presentation_Tools","translated_slug":"","page_count":30,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77357987,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77357987/thumbnails/1.jpg","file_name":"PresentationTools.pdf","download_url":"https://www.academia.edu/attachments/77357987/download_file","bulk_download_file_name":"From_PaperPoint_to_MindXpres_Towards_Enh.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77357987/PresentationTools-libre.pdf?1640509034=\u0026response-content-disposition=attachment%3B+filename%3DFrom_PaperPoint_to_MindXpres_Towards_Enh.pdf\u0026Expires=1744203891\u0026Signature=b9IudiuN1F1OpIk3lbPoZ7H61oth5gIjmRGJ2FsfEQ5DeMjassD0KVV670vi9tjx4xflQpsyK84n8FN~UC5Clc33YbjB97b4bJHICxMDMbXsKTPGff~NrN9jLcXRR4sE80pRn09EG52TAyDhewEYMO7avBdt2nJF9SIzbCJLJX1LufUoEYhOcKWHcX1qdxXIuR7KmVaDq8zGDYy2xcmRHOA5sXWVGn~QhFjnaRSJMgwydB~~9OtEMrwcpmJHQlI7EMwAXOTFcStQtm6OHKx3MBnzBis2soRh21Ssz6-HPAeKVtoF0BGDHEXuaw4N1lxvq0u6QAsbfetkwg3EM7bNXA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":933,"name":"New 
Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":69857,"name":"PowerPoint","url":"https://www.academia.edu/Documents/in/PowerPoint"},{"id":631659,"name":"Power Point Presentations","url":"https://www.academia.edu/Documents/in/Power_Point_Presentations"},{"id":1273494,"name":"Slideware","url":"https://www.academia.edu/Documents/in/Slideware"},{"id":1705150,"name":"MindXpres","url":"https://www.academia.edu/Documents/in/MindXpres"}],"urls":[{"id":3973246,"url":"http://www.slideshare.net/signer/from-paperpoint-to-mindxpres-towards-enhanced-presentation-tools"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-9649249-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="2084803"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/2084803/Cross_Media_Information_Systems_Quo_Vadis"><img alt="Research paper thumbnail of Cross-Media Information Systems - Quo Vadis?" 
class="work-thumbnail" src="https://attachments.academia-assets.com/77525694/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/2084803/Cross_Media_Information_Systems_Quo_Vadis">Cross-Media Information Systems - Quo Vadis?</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="c9fe10ff7a0109c4eb6a39d155285a00" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77525694,&quot;asset_id&quot;:2084803,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77525694/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="2084803"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="2084803"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 2084803; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=2084803]").text(description); $(".js-view-count[data-work-id=2084803]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 2084803; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='2084803']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "c9fe10ff7a0109c4eb6a39d155285a00" } } $('.js-work-strip[data-work-id=2084803]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":2084803,"title":"Cross-Media Information Systems - Quo Vadis?","translated_title":"","metadata":{"time":{"end_hour":11,"start_hour":10,"errors":{}},"location":"EPFL, Lausanne","event_date":{"day":2,"month":11,"year":2012,"errors":{}},"ai_abstract":"The paper explores the shortcomings of current digital documents that mimic traditional paper 
interfaces and emphasizes the need for innovative cross-media information systems. It critiques the limitations of existing WYSIWYG approaches and discusses alternative frameworks for enhanced reading and writing experiences that leverage richer multimedia interactions. The proposed MindXpres Presentation Tool exemplifies these concepts by enabling non-linear navigation and associative linking, addressing the challenges of content reusability and rich media embedding.","organization":"Vrije Universiteit Brussel"},"translated_abstract":null,"internal_url":"https://www.academia.edu/2084803/Cross_Media_Information_Systems_Quo_Vadis","translated_internal_url":"","created_at":"2012-11-03T12:44:53.564-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77525694,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77525694/thumbnails/1.jpg","file_name":"EPFL.pdf","download_url":"https://www.academia.edu/attachments/77525694/download_file","bulk_download_file_name":"Cross_Media_Information_Systems_Quo_Vadi.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77525694/EPFL-libre.pdf?1640735525=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Information_Systems_Quo_Vadi.pdf\u0026Expires=1744203891\u0026Signature=VhgaXYaWoTxoZBIKZGyskyqON8O07KPVTtBAyP7YdpjZ6YSCiqTRk81bjnZmmB5BhRo6-DdxqOCPIV30A5WfmCyfOSBQjY8JMBEsP~oOE40GYRSgmDpH35d3fzaU6tzEUGbwMflkrRieGfDo-AcHAn7gTELsqmu1tG41fpQOdIQZnlN5OA2OQbB8cB7SiTvaOtdGGBVVUF4ZUUsndV0v5dz1CKmsorpCIlH99PrdUi9NcZh4pIFsUIy1acblux5WX3IXk1nC4VljbO8zEN9UQs7I1rB4WDlFcCI-Ms0pwyBMaQQ16-Nkx40bU6ehV5C3lUYN40bdEOtAOer-IOh7cg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Cross_Media_Information_Systems_Quo_Vadis","translated_slug":"","page_count":27,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77525694,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77525694/thumbnails/1.jpg","file_name":"EPFL.pdf","download_url":"https://www.academia.edu/attachments/77525694/download_file","bulk_download_file_name":"Cross_Media_Information_Systems_Quo_Vadi.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77525694/EPFL-libre.pdf?1640735525=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Information_Systems_Quo_Vadi.pdf\u0026Expires=1744203891\u0026Signature=VhgaXYaWoTxoZBIKZGyskyqON8O07KPVTtBAyP7YdpjZ6YSCiqTRk81bjnZmmB5BhRo6-DdxqOCPIV30A5WfmCyfOSBQjY8JMBEsP~oOE40GYRSgmDpH35d3fzaU6tzEUGbwMflkrRieGfDo-AcHAn7gTELsqmu1tG41fpQOdIQZnlN5OA2OQbB8cB7SiTvaOtdGGBVVUF4ZUUsndV0v5dz1CKmsorpCIlH99PrdUi9NcZh4pIFsUIy1acblux5WX3IXk1nC4VljbO8zEN9UQs7I1rB4WDlFcCI-Ms0pwyBMaQQ16-Nkx40bU6ehV5C3lUYN40bdEOtAOer-IOh7cg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information 
Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1384,"name":"Web Engineering","url":"https://www.academia.edu/Documents/in/Web_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":11732,"name":"Linked Data","url":"https://www.academia.edu/Documents/in/Linked_Data"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":29124,"name":"Web Science","url":"https://www.academia.edu/Documents/in/Web_Science"},{"id":30947,"name":"The Internet","url":"https://www.academia.edu/Documents/in/The_Internet"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":67792,"name":"Multi-Touch","url":"https://www.academia.edu/Documents/in/Multi-Touch"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"}],"urls":[{"id":370437,"url":"http://www.slideshare.net/signer/crossmedia-information-systems-quo-vadis"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-2084803-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1719072"><div class="profile--work_thumbnail hidden-xs"><a 
class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1719072/HTML5_and_the_Open_Web_Platform"><img alt="Research paper thumbnail of HTML5 and the Open Web Platform" class="work-thumbnail" src="https://attachments.academia-assets.com/64680462/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1719072/HTML5_and_the_Open_Web_Platform">HTML5 and the Open Web Platform</a></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-1719072-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-1719072-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077816/figure-1-beat-signer-department-of-computer-science-bsigner"><img alt="Beat Signer - Department of Computer Science - bsigner@vub.ac.be " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077827/figure-2-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077850/figure-3-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077870/figure-4-seat-signer-department-of-computer-science-bsigner"><img alt="seat Signer - Department of Computer Science - bsigner@vub.ac.be " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077886/figure-5-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077910/figure-6-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077935/figure-7-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077949/figure-8-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11077977/figure-9-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_009.jpg" /></a></figure><figure class="figure-slide-container"><a 
href="https://www.academia.edu/figures/11077995/figure-10-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_010.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078021/figure-11-rather-than-updating-an-entire-resource-webpage-we"><img alt="Rather than updating an entire resource (e.g. webpage we can asynchronously update parts of a resource = e.g. implementation of Rich Internet Applications via AJAX " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_011.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078038/figure-12-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_012.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078077/figure-13-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/figure_013.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078109/table-1-when-can-use-cats-html"><img alt="When can | use..., http://caniuse.com/#cats=HTML5 " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078125/table-2-will-work-in-ie-and-safari-provided-the-user-has-the"><img alt="Will work in IE9 and Safari provided the user has the WebM codecs installed. Multimedia format designed to provide a royalty-free, high-quality open video compression format for use with HTMLS5 video. When can | use..., http://caniuse.com/#feat=webm « WebM/VP8 video format - other " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078143/table-3-commonly-used-video-compression-format-not-royalty"><img alt="Commonly used video compression format (not royalty-free) Support in Chrome may be dropped in some upcoming version. The Android 2.3 browser currently requires specific handling to play videos. Firefox may include support on some platforms in an upcoming version. - MPEG-4/H.264 video format - other When can | use..., http://caniuse.com/#feat=mpeg4 " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078160/table-4-nhen-can-use-feat-ogv-ogg-theora-video-format-other"><img alt="Nhen can | use..., http://caniuse.com/#feat=ogv + Ogg/Theora video format - other " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078183/table-5-support-listed-as-partial-refers-to-the-fact-that"><img alt="Support listed as &quot;partial&quot; refers to the fact that not all users with these browsers have WebGL access. This is due to the additional requirement for users to have up to date video drivers. This problem was solved in Chrome as of version 18. Ni that WebGL is part of the Khronos Group, not the W3C. 
" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078197/table-6-html-and-the-open-web-platform"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078218/table-7-method-of-storing-data-client-side-allows-indexed"><img alt="Method of storing data client-side, allows indexed database queries. Previously known as WebSimpleDB API. When can | use..., http://caniuse.com/#search=indexed%20data " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078230/table-8-method-of-informing-website-of-the-user-geographical"><img alt="Method of informing a website of the user&#39;s geographical location When can | use..., http://caniuse.com/#search=geolocation " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078241/table-9-partial-support-in-ie-refers-to-no-support-for-the"><img alt="Partial support in IE refers to no support for the dataTransfer.files or .types objects and limited supported formats for dataTransfer.setData/getData. When can | use..., http://caniuse.com/#feat=dragndrop " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_009.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078250/table-10-method-of-running-scripts-in-the-background"><img alt="Method of running scripts in the background, isolated from the web page When can | use..., http://caniuse.com/#feat=webworkers " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_010.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/11078270/table-11-when-can-use-feat-offline-apps-offline-web"><img alt="When can | use..., http://caniuse.com/#feat=offline-apps « Offline web applications - Working Draft " class="figure-slide-image" src="https://figures.academia-assets.com/64680462/table_011.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-1719072-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="3a93d5b2943a0796224e33ce7df87f3f" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:64680462,&quot;asset_id&quot;:1719072,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/64680462/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1719072"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i 
class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1719072"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1719072; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1719072]").text(description); $(".js-view-count[data-work-id=1719072]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1719072; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1719072']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "3a93d5b2943a0796224e33ce7df87f3f" } } $('.js-work-strip[data-work-id=1719072]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1719072,"title":"HTML5 and the Open Web Platform","translated_title":"","metadata":{"time":{"end_hour":16,"start_hour":15,"errors":{}},"location":"Vrije Universiteit Brussel Computer Science","event_date":{"day":26,"month":3,"year":2012,"errors":{}},"ai_abstract":"The paper discusses the development and features of HTML5, emphasizing its community-driven approach and collaborative efforts among major technology companies to enhance web standards. It explores key HTML5 features such as the Canvas element for graphics, offline web applications, geolocation capabilities, web workers for background processing, microdata for enriched semantics, and drag-and-drop functionalities. 
The advancements brought by HTML5 significantly improve web experience, enabling high-performance applications and interactivity."},"translated_abstract":null,"internal_url":"https://www.academia.edu/1719072/HTML5_and_the_Open_Web_Platform","translated_internal_url":"","created_at":"2012-03-27T21:30:45.056-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":64680462,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/64680462/thumbnails/1.jpg","file_name":"HTML5_and_the_Open_Web_Platform.pdf","download_url":"https://www.academia.edu/attachments/64680462/download_file","bulk_download_file_name":"HTML5_and_the_Open_Web_Platform.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/64680462/HTML5_and_the_Open_Web_Platform-libre.pdf?1602699393=\u0026response-content-disposition=attachment%3B+filename%3DHTML5_and_the_Open_Web_Platform.pdf\u0026Expires=1744203891\u0026Signature=d6tbfHlo-~tyjQCrNiUjSiKTxeT8Ctor9hE3hh7tsxky-KwLRvaJhPA5~tUM1OzjxnYDqUXtK3Ra8cZntuNcJRs4TRRzSqf8zSwPq~LJt3ia66k2OClTSp7fStyvIrae9Bw3vQpRbFaAEwmsT19MKVpKHSvH1QlfbNDVvkfscHkytIQ58b78VZiG9xLZYLsK181xFvN0glofvTRdkmOfU08PqijPcPlga4gaKAFDkOOHiVIfgV9a4U-mxAYrsO0tQmIvQQ-v22bq-po9NUKEXNfN2204yxPyeYM80jDdOR0pIQ3cPAAF3lk3x0aLNyyKzi2gRSxkA11YZ7~6I6untg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"HTML5_and_the_Open_Web_Platform","translated_slug":"","page_count":53,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":64680462,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/64680462/thumbnails/1.jpg","file_name":"HTML5_and_the_Open_Web_Platform.pdf","download_url":"https://www.academia.edu/attachments/64680462/download_file","bulk_download_file_name":"HTML5_and_the_Open_Web_Platform.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/64680462/HTML5_and_the_Open_Web_Platform-libre.pdf?1602699393=\u0026response-content-disposition=attachment%3B+filename%3DHTML5_and_the_Open_Web_Platform.pdf\u0026Expires=1744203891\u0026Signature=d6tbfHlo-~tyjQCrNiUjSiKTxeT8Ctor9hE3hh7tsxky-KwLRvaJhPA5~tUM1OzjxnYDqUXtK3Ra8cZntuNcJRs4TRRzSqf8zSwPq~LJt3ia66k2OClTSp7fStyvIrae9Bw3vQpRbFaAEwmsT19MKVpKHSvH1QlfbNDVvkfscHkytIQ58b78VZiG9xLZYLsK181xFvN0glofvTRdkmOfU08PqijPcPlga4gaKAFDkOOHiVIfgV9a4U-mxAYrsO0tQmIvQQ-v22bq-po9NUKEXNfN2204yxPyeYM80jDdOR0pIQ3cPAAF3lk3x0aLNyyKzi2gRSxkA11YZ7~6I6untg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":5279,"name":"XML","url":"https://www.academia.edu/Documents/in/XML"},{"id":5978,"name":"Web Technologies","url":"https://www.academia.edu/Documents/in/Web_Technologies"},{"id":8130,"name":"Web Development","url":"https://www.academia.edu/Documents/in/Web_Development"},{"id":10472,"name":"Web 
Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":40094,"name":"CSS","url":"https://www.academia.edu/Documents/in/CSS"},{"id":63534,"name":"Javascript","url":"https://www.academia.edu/Documents/in/Javascript"},{"id":105982,"name":"HTML","url":"https://www.academia.edu/Documents/in/HTML"}],"urls":[{"id":296070,"url":"http://www.slideshare.net/signer/html5-and-the-open-web-platform"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-1719072-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1697630"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1697630/Search_Engine_Optimisation_SEO_and_Search_Engine_Marketing_SEM_Seminar_on_Web_Search"><img alt="Research paper thumbnail of Search Engine Optimisation (SEO) and Search Engine Marketing (SEM) - Seminar on Web Search" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1697630/Search_Engine_Optimisation_SEO_and_Search_Engine_Marketing_SEM_Seminar_on_Web_Search">Search Engine Optimisation (SEO) and Search Engine Marketing (SEM) - Seminar on Web Search</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="8034235092bea99e6178a73962bce4dc" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30274735,&quot;asset_id&quot;:1697630,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30274735/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1697630"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1697630"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1697630; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1697630]").text(description); $(".js-view-count[data-work-id=1697630]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1697630; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1697630']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); 
container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "8034235092bea99e6178a73962bce4dc" } } $('.js-work-strip[data-work-id=1697630]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1697630,"title":"Search Engine Optimisation (SEO) and Search Engine Marketing (SEM) - Seminar on Web Search","translated_title":"","metadata":{"time":{"end_hour":14,"start_hour":12,"errors":{}},"event_date":{"day":9,"month":9,"year":2011,"errors":{}},"ai_abstract":"This paper provides a comprehensive overview of Search Engine Optimization (SEO) and Search Engine Marketing (SEM), focusing on structural and technological choices for improving website visibility. It outlines various on-page and off-page optimization strategies, differentiating between positive and negative factors, including the impacts of black hat techniques. Furthermore, it discusses tools and guidelines available for optimizing websites, the importance of web analysis, and the role of social networks and advertising platforms in attracting visitors."},"translated_abstract":null,"internal_url":"https://www.academia.edu/1697630/Search_Engine_Optimisation_SEO_and_Search_Engine_Marketing_SEM_Seminar_on_Web_Search","translated_internal_url":"","created_at":"2011-09-10T06:46:47.652-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":30274735,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://a.academia-assets.com/images/blank-paper.jpg","file_name":"seo.pdf","download_url":"https://www.academia.edu/attachments/30274735/download_file","bulk_download_file_name":"Search_Engine_Optimisation_SEO_and_Searc.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30274735/seo-libre.pdf?1390882657=\u0026response-content-disposition=attachment%3B+filename%3DSearch_Engine_Optimisation_SEO_and_Searc.pdf\u0026Expires=1744203891\u0026Signature=AV9tRT76GNp21bv-xf7KbDT7-4VhFqT~DWUU3yNXd3iYeqQbEv9Hhkju3KupJ5JynteWa1JUsGytnMb5TFNkeItr2dBtNC0abzD7CZMh0d1eeSjmfXj7kV1BAls8mlYLf0ejtcglIBPTqImOkb3JlQB8q3KyQCxRZDSJPeZrAPuKtEW32DullJCFkHw0zp-RkSw-1dIuA0rCriq9dr1NLQFW3~HMmMRSox6vvc7dh4OcXXT8W3uMWe4VWhL7loGQNuKBmjw1n2zrXYEyIlXRxVLm6Ik7pDYvDNOdHANSdx2EEAVEUcC-OAdQMhVqegQHiXKj5FdPDOypPPP2kwhy4w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Search_Engine_Optimisation_SEO_and_Search_Engine_Marketing_SEM_Seminar_on_Web_Search","translated_slug":"","page_count":42,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat 
Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":30274735,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://a.academia-assets.com/images/blank-paper.jpg","file_name":"seo.pdf","download_url":"https://www.academia.edu/attachments/30274735/download_file","bulk_download_file_name":"Search_Engine_Optimisation_SEO_and_Searc.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30274735/seo-libre.pdf?1390882657=\u0026response-content-disposition=attachment%3B+filename%3DSearch_Engine_Optimisation_SEO_and_Searc.pdf\u0026Expires=1744203891\u0026Signature=AV9tRT76GNp21bv-xf7KbDT7-4VhFqT~DWUU3yNXd3iYeqQbEv9Hhkju3KupJ5JynteWa1JUsGytnMb5TFNkeItr2dBtNC0abzD7CZMh0d1eeSjmfXj7kV1BAls8mlYLf0ejtcglIBPTqImOkb3JlQB8q3KyQCxRZDSJPeZrAPuKtEW32DullJCFkHw0zp-RkSw-1dIuA0rCriq9dr1NLQFW3~HMmMRSox6vvc7dh4OcXXT8W3uMWe4VWhL7loGQNuKBmjw1n2zrXYEyIlXRxVLm6Ik7pDYvDNOdHANSdx2EEAVEUcC-OAdQMhVqegQHiXKj5FdPDOypPPP2kwhy4w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":975,"name":"Sex and Gender","url":"https://www.academia.edu/Documents/in/Sex_and_Gender"},{"id":5273,"name":"Class","url":"https://www.academia.edu/Documents/in/Class"},{"id":53238,"name":"Race","url":"https://www.academia.edu/Documents/in/Race"}],"urls":[{"id":295619,"url":"http://www.slideshare.net/signer/seo-search-engine-optimisation-and-sem-search-engine-marketing-seminar-on-web-search"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1697630-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1697164"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1697164/History_of_Search_and_Web_Search_Engines_Seminar_on_Web_Search"><img alt="Research paper thumbnail of History of Search and Web Search Engines - Seminar on Web Search" class="work-thumbnail" src="https://attachments.academia-assets.com/46527420/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1697164/History_of_Search_and_Web_Search_Engines_Seminar_on_Web_Search">History of Search and Web Search Engines - Seminar on Web Search</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">WISE Lab, Vrije Universiteit Brussel bsigner@vub.ac.be  cross-media information spaces and archi...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">WISE Lab, Vrije Universiteit Brussel <a href="mailto:bsigner@vub.ac.be" rel="nofollow">bsigner@vub.ac.be</a>  cross-media information spaces and architectures  interactive paper and augmented reality  multimodal and multi-touch interaction  Content of the Seminar  history of search and web search engines  search engine optimisation (SEO) and search engine marketing (SEM)  current and future trends in web search</span></div><div 
class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="be65fdf8749368a4d5dee16345aa456e" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:46527420,&quot;asset_id&quot;:1697164,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/46527420/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1697164"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1697164"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1697164; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1697164]").text(description); $(".js-view-count[data-work-id=1697164]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1697164; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1697164']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "be65fdf8749368a4d5dee16345aa456e" } } $('.js-work-strip[data-work-id=1697164]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1697164,"title":"History of Search and Web Search Engines - Seminar on Web Search","translated_title":"","metadata":{"time":{"end_hour":14,"start_hour":12,"errors":{}},"event_date":{"day":5,"month":9,"year":2011,"errors":{}},"grobid_abstract":"WISE Lab, Vrije Universiteit Brussel bsigner@vub.ac.be  cross-media information spaces and architectures  interactive paper and augmented reality  multimodal and multi-touch interaction  Content of the Seminar  history of search and web search engines  search engine optimisation (SEO) and search engine marketing (SEM)  current and future trends in web 
search","grobid_abstract_attachment_id":46527420},"translated_abstract":null,"internal_url":"https://www.academia.edu/1697164/History_of_Search_and_Web_Search_Engines_Seminar_on_Web_Search","translated_internal_url":"","created_at":"2011-09-05T06:28:09.465-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":46527420,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/46527420/thumbnails/1.jpg","file_name":"webSearch.pdf","download_url":"https://www.academia.edu/attachments/46527420/download_file","bulk_download_file_name":"History_of_Search_and_Web_Search_Engines.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/46527420/webSearch-libre.pdf?1466055574=\u0026response-content-disposition=attachment%3B+filename%3DHistory_of_Search_and_Web_Search_Engines.pdf\u0026Expires=1744203892\u0026Signature=N9SJVXsK9JHvKJyqsT6yKDl~F3d2v-0wwxBkKslGpPdlSeiJcE6XqIA~5e3dcivBpVIXaogA89YVIGZfl4id0EG1ZS44lynJlSfmzk6mxj~s-jIOeXGEyT4fCXaynORHz2h3Mm-bdLq62xQPnLF~grUWem-R3kGVweZ1rvvPfscbLTa5XBTmG50-qq3pc8wgHGPG~nZ0kGxwBwFTaH9p9CmalQ0mCjU5Ot0OLnF9W~YPD8yyzC7A0U1SIyo4dpVWy3DvFUv~stCtA-2-2beNE4L2AQsc92uRnD--pon3ojIQffMs9KILnB9FGZRUJJVS9EpUPFw9X2tgGuLLIBANeg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"History_of_Search_and_Web_Search_Engines_Seminar_on_Web_Search","translated_slug":"","page_count":54,"language":"en","content_type":"Work","summary":"WISE Lab, Vrije Universiteit Brussel bsigner@vub.ac.be  cross-media information spaces and architectures  interactive paper and augmented reality  multimodal and multi-touch interaction  Content of the Seminar  history of search and web search engines  search engine optimisation (SEO) and search engine marketing (SEM)  current and future trends in web search","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":46527420,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/46527420/thumbnails/1.jpg","file_name":"webSearch.pdf","download_url":"https://www.academia.edu/attachments/46527420/download_file","bulk_download_file_name":"History_of_Search_and_Web_Search_Engines.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/46527420/webSearch-libre.pdf?1466055574=\u0026response-content-disposition=attachment%3B+filename%3DHistory_of_Search_and_Web_Search_Engines.pdf\u0026Expires=1744203892\u0026Signature=N9SJVXsK9JHvKJyqsT6yKDl~F3d2v-0wwxBkKslGpPdlSeiJcE6XqIA~5e3dcivBpVIXaogA89YVIGZfl4id0EG1ZS44lynJlSfmzk6mxj~s-jIOeXGEyT4fCXaynORHz2h3Mm-bdLq62xQPnLF~grUWem-R3kGVweZ1rvvPfscbLTa5XBTmG50-qq3pc8wgHGPG~nZ0kGxwBwFTaH9p9CmalQ0mCjU5Ot0OLnF9W~YPD8yyzC7A0U1SIyo4dpVWy3DvFUv~stCtA-2-2beNE4L2AQsc92uRnD--pon3ojIQffMs9KILnB9FGZRUJJVS9EpUPFw9X2tgGuLLIBANeg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":8813,"name":"Web search","url":"https://www.academia.edu/Documents/in/Web_search"},{"id":35838,"name":"Search Engines","url":"https://www.academia.edu/Documents/in/Search_Engines"},{"id":1005705,"name":"Search Engine 
Optimiztion","url":"https://www.academia.edu/Documents/in/Search_Engine_Optimiztion"}],"urls":[{"id":295613,"url":"http://www.slideshare.net/signer/web-search-history-of-search-and-web-search-engines"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1697164-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1659572"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1659572/Paper_Digital_User_Interfaces_Applications_Frameworks_and_Future_Challenges"><img alt="Research paper thumbnail of Paper-Digital User Interfaces - Applications, Frameworks and Future Challenges" class="work-thumbnail" src="https://attachments.academia-assets.com/77358306/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1659572/Paper_Digital_User_Interfaces_Applications_Frameworks_and_Future_Challenges">Paper-Digital User Interfaces - Applications, Frameworks and Future Challenges</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">&quot;While there have been dramatic increases in the use of digital technologies for information stor...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">&quot;While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last few decades, the affordances of paper have ensured its retention as a key information medium. Despite predictions of the paperless office, paper is ever more present in our daily work. However, there is a gap between the paper and digital worlds: information present in paper documents cannot be seamlessly transferred to digital media and digital services are not easily accessible from the paper world. <br /> <br />In this talk I will present an information-centric approach for integrating paper with digital as well as physical media based on a general cross-media information platform (iServer). Some details about the architecture and implementation of the iServer platform as well as the underlying resource-selector-link (RSL) metamodel for cross-media linking will be highlighted. A selection of interactive paper applications that have been developed based on this platform over the past nine years will be presented, including the EdFest interactive paper guide for the Edinburgh festivals, the PaperPoint presentation tool as well as the PaperProof proof-editing solution. Challenges and solutions for novel forms of interactive paper and cross-media publishing are discussed based on the presented applications. 
This includes specific extensions of the iServer platform and RSL model as well as the application of our solution in new domains such as digital libraries, cross-media annotation and retrieval or personal cross-media information management that goes beyond the hierarchical information management imposed by the desktop metaphor.&quot;</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="d832174ccbe851e29bd6fb2b4e8590ef" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77358306,&quot;asset_id&quot;:1659572,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77358306/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1659572"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1659572"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1659572; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1659572]").text(description); $(".js-view-count[data-work-id=1659572]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1659572; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1659572']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "d832174ccbe851e29bd6fb2b4e8590ef" } } $('.js-work-strip[data-work-id=1659572]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1659572,"title":"Paper-Digital User Interfaces - Applications, Frameworks and Future Challenges","translated_title":"","metadata":{"time":{"end_hour":15,"start_hour":14,"errors":{}},"abstract":"\"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last few decades, the affordances of paper have ensured its retention as a key information medium. 
Despite predictions of the paperless office, paper is ever more present in our daily work. However, there is a gap between the paper and digital worlds: information present in paper documents cannot be seamlessly transferred to digital media and digital services are not easily accessible from the paper world.\r\n\r\nIn this talk I will present an information-centric approach for integrating paper with digital as well as physical media based on a general cross-media information platform (iServer). Some details about the architecture and implementation of the iServer platform as well as the underlying resource-selector-link (RSL) metamodel for cross-media linking will be highlighted. A selection of interactive paper applications that have been developed based on this platform over the past nine years will be presented, including the EdFest interactive paper guide for the Edinburgh festivals, the PaperPoint presentation tool as well as the PaperProof proof-editing solution. Challenges and solutions for novel forms of interactive paper and cross-media publishing are discussed based on the presented applications. This includes specific extensions of the iServer platform and RSL model as well as the application of our solution in new domains such as digital libraries, cross-media annotation and retrieval or personal cross-media information management that goes beyond the hierarchical information management imposed by the desktop metaphor.\"","location":"User Interface Colloquium, Otto-von-Guericke University Magdeburg, Germany, November 2009","event_date":{"day":2,"month":11,"year":2009,"errors":{}}},"translated_abstract":"\"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last few decades, the affordances of paper have ensured its retention as a key information medium. Despite predictions of the paperless office, paper is ever more present in our daily work. However, there is a gap between the paper and digital worlds: information present in paper documents cannot be seamlessly transferred to digital media and digital services are not easily accessible from the paper world.\r\n\r\nIn this talk I will present an information-centric approach for integrating paper with digital as well as physical media based on a general cross-media information platform (iServer). Some details about the architecture and implementation of the iServer platform as well as the underlying resource-selector-link (RSL) metamodel for cross-media linking will be highlighted. A selection of interactive paper applications that have been developed based on this platform over the past nine years will be presented, including the EdFest interactive paper guide for the Edinburgh festivals, the PaperPoint presentation tool as well as the PaperProof proof-editing solution. Challenges and solutions for novel forms of interactive paper and cross-media publishing are discussed based on the presented applications. 
This includes specific extensions of the iServer platform and RSL model as well as the application of our solution in new domains such as digital libraries, cross-media annotation and retrieval or personal cross-media information management that goes beyond the hierarchical information management imposed by the desktop metaphor.\"","internal_url":"https://www.academia.edu/1659572/Paper_Digital_User_Interfaces_Applications_Frameworks_and_Future_Challenges","translated_internal_url":"","created_at":"2009-10-27T16:51:48.287-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77358306,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77358306/thumbnails/1.jpg","file_name":"paperDigital.pdf","download_url":"https://www.academia.edu/attachments/77358306/download_file","bulk_download_file_name":"Paper_Digital_User_Interfaces_Applicatio.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77358306/paperDigital-libre.pdf?1640509005=\u0026response-content-disposition=attachment%3B+filename%3DPaper_Digital_User_Interfaces_Applicatio.pdf\u0026Expires=1744203892\u0026Signature=Bg0tr2YklBJmE5HvlotQtUNmlVxg3hOTo7jpfy0W0zEx9L4Q9Z4mzfK4~8qW4VpDH2hfceB6TnZeHUHBAdnDxuEs~Bx3qJHNf5BaAKIkKzt6QFLcL0dGoEXMUqFODw6ncyu1Qj-FUarL3dekmf4BgcV6dzmLjco22cJonaMKisGNXVN7v335W8T-tsWM7ioYUYsf-2uLYFFKHoYLI1vKAD4e9Y7KZhZf52roJoHKUEQpMDI4mOZUxsNipRBzB~cSiG8n9i04bl7bQLqfYUrdNutQGQpRV48H6yV1riRoLJrSmMWCIVYf9f9ZVJs1HlWY7~~CSyKpPWL9uAJe1HYkaA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Paper_Digital_User_Interfaces_Applications_Frameworks_and_Future_Challenges","translated_slug":"","page_count":27,"language":"en","content_type":"Work","summary":"\"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last few decades, the affordances of paper have ensured its retention as a key information medium. Despite predictions of the paperless office, paper is ever more present in our daily work. However, there is a gap between the paper and digital worlds: information present in paper documents cannot be seamlessly transferred to digital media and digital services are not easily accessible from the paper world.\r\n\r\nIn this talk I will present an information-centric approach for integrating paper with digital as well as physical media based on a general cross-media information platform (iServer). Some details about the architecture and implementation of the iServer platform as well as the underlying resource-selector-link (RSL) metamodel for cross-media linking will be highlighted. A selection of interactive paper applications that have been developed based on this platform over the past nine years will be presented, including the EdFest interactive paper guide for the Edinburgh festivals, the PaperPoint presentation tool as well as the PaperProof proof-editing solution. Challenges and solutions for novel forms of interactive paper and cross-media publishing are discussed based on the presented applications. 
This includes specific extensions of the iServer platform and RSL model as well as the application of our solution in new domains such as digital libraries, cross-media annotation and retrieval or personal cross-media information management that goes beyond the hierarchical information management imposed by the desktop metaphor.\"","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77358306,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77358306/thumbnails/1.jpg","file_name":"paperDigital.pdf","download_url":"https://www.academia.edu/attachments/77358306/download_file","bulk_download_file_name":"Paper_Digital_User_Interfaces_Applicatio.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77358306/paperDigital-libre.pdf?1640509005=\u0026response-content-disposition=attachment%3B+filename%3DPaper_Digital_User_Interfaces_Applicatio.pdf\u0026Expires=1744203892\u0026Signature=Bg0tr2YklBJmE5HvlotQtUNmlVxg3hOTo7jpfy0W0zEx9L4Q9Z4mzfK4~8qW4VpDH2hfceB6TnZeHUHBAdnDxuEs~Bx3qJHNf5BaAKIkKzt6QFLcL0dGoEXMUqFODw6ncyu1Qj-FUarL3dekmf4BgcV6dzmLjco22cJonaMKisGNXVN7v335W8T-tsWM7ioYUYsf-2uLYFFKHoYLI1vKAD4e9Y7KZhZf52roJoHKUEQpMDI4mOZUxsNipRBzB~cSiG8n9i04bl7bQLqfYUrdNutQGQpRV48H6yV1riRoLJrSmMWCIVYf9f9ZVJs1HlWY7~~CSyKpPWL9uAJe1HYkaA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"}],"urls":[{"id":295097,"url":"http://www.slideshare.net/signer/paperdigital-user-interfaces-applications-frameworks-and-future-challenges"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1659572-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661716"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661716/The_Remediatio_n_of_Paper_Creating_Affinities_between_Paper_and_Cross_Medi_a_Informatio_n_Spaces"><img alt="Research paper thumbnail of The Remediatio­n of Paper - Creating Affinities between Paper and Cross-Medi­a Informatio­n Spaces" class="work-thumbnail" src="https://attachments.academia-assets.com/77528258/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661716/The_Remediatio_n_of_Paper_Creating_Affinities_between_Paper_and_Cross_Medi_a_Informatio_n_Spaces">The Remediatio­n of Paper - Creating Affinities between Paper and Cross-Medi­a Informatio­n Spaces</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">&quot;While there have 
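The resource-selector-link (RSL) metamodel mentioned in the abstract distinguishes three core concepts: resources (entire media items such as a printed page or a web page), selectors (parts of a resource, such as a region on a page) and links between them. The following Python sketch is a hypothetical, minimal rendering of these concepts for illustration only; the class names, fields and example URIs are assumptions and do not reflect the actual iServer API.

    from dataclasses import dataclass

    # Hypothetical sketch of the three RSL core concepts; names and
    # fields are illustrative assumptions, not the iServer implementation.

    @dataclass
    class Resource:
        """An entire media item, e.g. a printed page, a web page or a movie."""
        uri: str

    @dataclass
    class Selector:
        """Addresses part of a resource, e.g. a rectangle on a printed page."""
        resource: Resource
        region: tuple  # media-specific addressing, e.g. (x, y, width, height)

    @dataclass
    class Link:
        """A link from one or more source entities to one or more targets."""
        sources: list  # Resources or Selectors
        targets: list  # Resources or Selectors

    # Example: link a region on an interactive paper page to a web page.
    page = Resource("paper://edfest/guide/page/12")   # invented URI scheme
    website = Resource("http://www.edfringe.com")
    venue_box = Selector(page, (40, 120, 200, 50))
    link = Link(sources=[venue_box], targets=[website])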
The Remediation of Paper - Creating Affinities between Paper and Cross-Media Information Spaces
Talk, Vrije Universiteit Brussel Computer Science, April 2009. 34 slides. https://www.academia.edu/1661716/The_Remediatio_n_of_Paper_Creating_Affinities_between_Paper_and_Cross_Medi_a_Informatio_n_Spaces
While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last few decades, the affordances of paper have ensured its retention as a key information medium. Despite predictions of the paperless office, paper is ever more present in our daily work. However, there is a gap between the paper and digital worlds: information present in paper documents cannot be transferred seamlessly to digital media and digital services are not easily accessible from the paper world.
In this seminar talk I will present an information-centric approach for integrating paper with digital as well as physical media based on a general cross-media information platform (iServer). Some details about the architecture and implementation of the iServer platform as well as the underlying RSL metamodel for cross-media linking will be highlighted. A selection of interactive paper applications that have been developed based on this platform over the past few years will be presented, including the EdFest interactive paper guide for the Edinburgh festivals and the PaperPoint tool for paper-driven PowerPoint presentations. Challenges and solutions for novel forms of interactive paper and cross-media publishing are discussed based on the presented applications.
The second part of my talk will address future directions for research on interactive paper and cross-media information management. This includes specific extensions of the iServer platform and RSL model as well as the application of our solution in new domains such as digital libraries, cross-media annotation and retrieval or personal cross-media information management that goes beyond the hierarchical information management imposed by the desktop metaphor.
Topics: Computer Science; Software Engineering; Human Computer Interaction; Ubiquitous Computing; Multimedia; Augmented Reality; Conceptual Modelling; Interactive and Digital Media; Cross-Media Information Spaces; Interactive Paper; Tangible User Interfaces; Hypermedia
Slides: https://speakerdeck.com/signer/the-remediation-of-paper-creating-affinities-between-paper-and-cross-media-information-spaces
Interaktives Papier - Ein neues Medium zur Integration von Papier und digitalen Medien in crossmedialen Publikationen (Interactive Paper - A New Medium for the Integration of Paper and Digital Media in Cross-Media Publications)
Talk, Swiss Publishing Week, Hotel Banana City, Winterthur, Switzerland, 19 September 2008. 18 slides. https://www.academia.edu/1661718/Interaktiv_es_Papier_Ein_neues_Medium_zur_Integratio_n_von_Papier_und_digitalen_Medien_in_crossmedia_len_Publikatio_nen
The talk discusses the concept of interactive paper as a new medium that integrates traditional paper with digital media in cross-media publications. It highlights the potential of this medium to enhance user engagement and to enable a seamless transition between physical and digital content.
Topics: Computer Science; Human Computer Interaction; New Media; Ubiquitous Computing; Multimedia; Augmented Reality; Interactive and Digital Media; Cross-Media Information Spaces; Interactive Paper; Digital Pen and Paper; Tangible User Interfaces; Hypermedia
Slides: https://speakerdeck.com/signer/interaktives-papier-ein-neues-medium-zur-integration-von-papier-und-digitalen-medien-in-crossmedialen-publikationen
href="https://www.academia.edu/1661720/Google_PageRank">Google PageRank</a></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-1661720-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-1661720-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437672/figure-1-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437682/figure-2-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437688/figure-3-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437699/figure-4-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437709/figure-5-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437718/figure-6-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437729/figure-7-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437737/figure-8-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437749/figure-9-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_009.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437755/figure-10-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_010.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437761/figure-11-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_011.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437767/figure-12-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_012.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437771/figure-13-google-pagerank"><img alt="" class="figure-slide-image" 
src="https://figures.academia-assets.com/30276920/figure_013.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437776/figure-14-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_014.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437784/figure-15-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_015.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/5437791/figure-16-google-pagerank"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30276920/figure_016.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-1661720-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="c581aee0bc2b61c8fe37dfd950761a9a" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30276920,&quot;asset_id&quot;:1661720,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30276920/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661720"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661720"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661720; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661720]").text(description); $(".js-view-count[data-work-id=1661720]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661720; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661720']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher 
|| _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "c581aee0bc2b61c8fe37dfd950761a9a" } } $('.js-work-strip[data-work-id=1661720]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661720,"title":"Google PageRank","translated_title":"","metadata":{"location":"Vrije Universiteit Brussel Computer Science","ai_abstract":"This paper discusses the PageRank algorithm, a link analysis algorithm used by Google to rank web pages in search engine results. The central premise of PageRank is that the importance of a webpage is determined by the quantity and quality of links directed towards it. The author elaborates on the mathematical formulation of PageRank, its implications for website development, and strategies to enhance a site's PageRank through effective link management.","conference_start_date":{"day":25,"month":8,"year":2008,"errors":{}}},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661720/Google_PageRank","translated_internal_url":"","created_at":"2010-01-12T00:37:18.199-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":30276920,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30276920/thumbnails/1.jpg","file_name":"Google_PageRank.pdf","download_url":"https://www.academia.edu/attachments/30276920/download_file","bulk_download_file_name":"Google_PageRank.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30276920/Google_PageRank-libre.pdf?1390882704=\u0026response-content-disposition=attachment%3B+filename%3DGoogle_PageRank.pdf\u0026Expires=1744203892\u0026Signature=XnD~p4vsUHBQMirgkX~~rUEhPmGGQYfyDUEhG1-M8-Kt4zS~jBq~bzlUl-sWOG-awi5a-ev~9BLyqSzBFKomNa2agLSWPdWdayoJ4GuDqYH6XCULqBizUaLub6FCaPJkO1QOMjVLL14IWueJ04O3V0-2WM-mwI5o1tq49WdSrxyL2tmZqNuyX9T5D1sYUn5bSlDQKhUjjW8oHmQNUkn1oJBjHrLisEhBxhhhA81p3b4jhMbD~kO5201wPuB5Wyu07LokUqZVQLOI1kKvJX7~BJc9L2mxueqVE8Pg59CjSnIO37chO7Q3T8QekHZ6RFqJGPLJ0VG8cVbQJBfI5CI8YQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Google_PageRank","translated_slug":"","page_count":29,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat 
Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":30276920,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30276920/thumbnails/1.jpg","file_name":"Google_PageRank.pdf","download_url":"https://www.academia.edu/attachments/30276920/download_file","bulk_download_file_name":"Google_PageRank.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30276920/Google_PageRank-libre.pdf?1390882704=\u0026response-content-disposition=attachment%3B+filename%3DGoogle_PageRank.pdf\u0026Expires=1744203892\u0026Signature=XnD~p4vsUHBQMirgkX~~rUEhPmGGQYfyDUEhG1-M8-Kt4zS~jBq~bzlUl-sWOG-awi5a-ev~9BLyqSzBFKomNa2agLSWPdWdayoJ4GuDqYH6XCULqBizUaLub6FCaPJkO1QOMjVLL14IWueJ04O3V0-2WM-mwI5o1tq49WdSrxyL2tmZqNuyX9T5D1sYUn5bSlDQKhUjjW8oHmQNUkn1oJBjHrLisEhBxhhhA81p3b4jhMbD~kO5201wPuB5Wyu07LokUqZVQLOI1kKvJX7~BJc9L2mxueqVE8Pg59CjSnIO37chO7Q3T8QekHZ6RFqJGPLJ0VG8cVbQJBfI5CI8YQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":984,"name":"Web 2.0","url":"https://www.academia.edu/Documents/in/Web_2.0"},{"id":8813,"name":"Web search","url":"https://www.academia.edu/Documents/in/Web_search"},{"id":9223,"name":"Digital Preservation","url":"https://www.academia.edu/Documents/in/Digital_Preservation"},{"id":9246,"name":"Social Media","url":"https://www.academia.edu/Documents/in/Social_Media"},{"id":21389,"name":"E-Government","url":"https://www.academia.edu/Documents/in/E-Government"},{"id":30947,"name":"The Internet","url":"https://www.academia.edu/Documents/in/The_Internet"},{"id":44873,"name":"Web 3.0","url":"https://www.academia.edu/Documents/in/Web_3.0"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"},{"id":295949,"name":"Digital Era","url":"https://www.academia.edu/Documents/in/Digital_Era"},{"id":597612,"name":"Library and Archival Science","url":"https://www.academia.edu/Documents/in/Library_and_Archival_Science"}],"urls":[{"id":295114,"url":"http://www.slideshare.net/signer/google-pagerank-presentation"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-1661720-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661721"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661721/PAD_Recent_Developmen_ts_and_Future_Work"><img alt="Research paper thumbnail of PAD - Recent Developmen­ts and Future Work" class="work-thumbnail" src="https://attachments.academia-assets.com/77882449/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661721/PAD_Recent_Developmen_ts_and_Future_Work">PAD - Recent Developmen­ts and Future Work</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="5412f30f09d6dd04c8818062bf1b5bae" 
class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77882449,&quot;asset_id&quot;:1661721,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77882449/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661721"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661721"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661721; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661721]").text(description); $(".js-view-count[data-work-id=1661721]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661721; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661721']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "5412f30f09d6dd04c8818062bf1b5bae" } } $('.js-work-strip[data-work-id=1661721]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661721,"title":"PAD - Recent Developmen­ts and Future Work","translated_title":"","metadata":{"location":"GlobIS Workshop, Zurich, Switzerland","event_date":{"day":10,"month":7,"year":2008,"errors":{}},"ai_abstract":"The paper discusses recent advancements in Pen and Paper-Based Interaction (PAD) technologies, focusing on innovations such as interactive tools and interfaces that enhance user interaction with digital content. 
It outlines current projects, including the development of the Interactive Table and various applications for gesture recognition and collaboration, while also proposing future enhancements, such as multi-pen support and improved integration of third-party applications.","publication_date":{"day":null,"month":null,"year":2008,"errors":{}}},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661721/PAD_Recent_Developmen_ts_and_Future_Work","translated_internal_url":"","created_at":"2010-01-12T00:39:40.878-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77882449,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77882449/thumbnails/1.jpg","file_name":"i3.pdf","download_url":"https://www.academia.edu/attachments/77882449/download_file","bulk_download_file_name":"PAD_Recent_Developmen_ts_and_Future_Work.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77882449/i3-libre.pdf?1641121164=\u0026response-content-disposition=attachment%3B+filename%3DPAD_Recent_Developmen_ts_and_Future_Work.pdf\u0026Expires=1744203892\u0026Signature=Yrp7TWSUjXasw3F0oLF49UBG0skCilaU2m3y-Z1XXc58fnVU5WHGcggrQKmBtbsYtQrVrR2v~AjkfVPBTM8dZJ0-BlokQDUCrP22rbdj1O~bApjWFgXRSpv6eVVQKr9koezWXCD9blqfwA2~jr3R0eeOAeiGfYr1~rza26h5OEac~XQr4v3mxO~dxWJUAW6F3UywBKsww4haOXWTIwOYgfwlMr-rEji0ikT70IJhxn4aCui4BLIxO9Q9o0WGKPJUic4JmqnVsh2E~1H2D3h3nCLSfga5BGKjpAXz2ny1XV4mpL8rRN5Ji7-2aGAJo~I4UKcFyf2XCMoSgvKKDezq4Q__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"PAD_Recent_Developmen_ts_and_Future_Work","translated_slug":"","page_count":22,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77882449,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77882449/thumbnails/1.jpg","file_name":"i3.pdf","download_url":"https://www.academia.edu/attachments/77882449/download_file","bulk_download_file_name":"PAD_Recent_Developmen_ts_and_Future_Work.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77882449/i3-libre.pdf?1641121164=\u0026response-content-disposition=attachment%3B+filename%3DPAD_Recent_Developmen_ts_and_Future_Work.pdf\u0026Expires=1744203892\u0026Signature=Yrp7TWSUjXasw3F0oLF49UBG0skCilaU2m3y-Z1XXc58fnVU5WHGcggrQKmBtbsYtQrVrR2v~AjkfVPBTM8dZJ0-BlokQDUCrP22rbdj1O~bApjWFgXRSpv6eVVQKr9koezWXCD9blqfwA2~jr3R0eeOAeiGfYr1~rza26h5OEac~XQr4v3mxO~dxWJUAW6F3UywBKsww4haOXWTIwOYgfwlMr-rEji0ikT70IJhxn4aCui4BLIxO9Q9o0WGKPJUic4JmqnVsh2E~1H2D3h3nCLSfga5BGKjpAXz2ny1XV4mpL8rRN5Ji7-2aGAJo~I4UKcFyf2XCMoSgvKKDezq4Q__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":11085,"name":"Cross-Media Information 
Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"}],"urls":[{"id":15981486,"url":"https://speakerdeck.com/signer/pad-recent-developments-and-future-work"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661721-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661722"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661722/Collaborat_ing_over_Paper_and_Digital_Media_Interactiv_e_Paper_Applicatio_ns_at_ETH_Zurich"><img alt="Research paper thumbnail of Collaborat­ing over Paper and Digital Media - Interactiv­e Paper Applicatio­ns @ ETH Zurich" class="work-thumbnail" src="https://attachments.academia-assets.com/77884248/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661722/Collaborat_ing_over_Paper_and_Digital_Media_Interactiv_e_Paper_Applicatio_ns_at_ETH_Zurich">Collaborat­ing over Paper and Digital Media - Interactiv­e Paper Applicatio­ns @ ETH Zurich</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="25ff602e798860efee6c38f5d983c845" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77884248,&quot;asset_id&quot;:1661722,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77884248/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661722"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661722"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661722; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661722]").text(description); $(".js-view-count[data-work-id=1661722]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661722; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661722']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 
})(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "25ff602e798860efee6c38f5d983c845" } } $('.js-work-strip[data-work-id=1661722]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661722,"title":"Collaborat­ing over Paper and Digital Media - Interactiv­e Paper Applicatio­ns @ ETH Zurich","translated_title":"","metadata":{"location":"CSCW Seminar, Zurich, Switzerland","event_date":{"day":27,"month":5,"year":2008,"errors":{}},"ai_abstract":"This paper discusses the integration of paper and digital media within interactive applications at ETH Zurich, focusing on collaborative editing tools that facilitate non-linear presentations and multi-user interactions. It highlights various student projects and innovations that utilize technologies like Anoto for enhancing user interfaces and memory storytelling.","publication_date":{"day":null,"month":null,"year":2008,"errors":{}}},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661722/Collaborat_ing_over_Paper_and_Digital_Media_Interactiv_e_Paper_Applicatio_ns_at_ETH_Zurich","translated_internal_url":"","created_at":"2010-01-12T00:42:38.040-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77884248,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77884248/thumbnails/1.jpg","file_name":"collaboratingOverPaperAndDigitalMedia.pdf","download_url":"https://www.academia.edu/attachments/77884248/download_file","bulk_download_file_name":"Collaborat_ing_over_Paper_and_Digital_Me.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77884248/collaboratingOverPaperAndDigitalMedia-libre.pdf?1641125115=\u0026response-content-disposition=attachment%3B+filename%3DCollaborat_ing_over_Paper_and_Digital_Me.pdf\u0026Expires=1744203892\u0026Signature=DQORGzfX8F~oUgZc4byDGCofx-ANWH0HyNb3QALdGYMjPKwwZ6VJ5ITXLCNBVz3uDOgmT6OwDa4X8CWwGW-WghBRejrirrhrarmwkGoBc70icDwXxwru08EJZbggzA1K19xMd~kFbGJV2AB2FIBD5U2cXGTVu9~nl3rdWNvqsCn~a1YMEY9-OEqnygaj0OcEUJnkHtvvLQ8e3P1SHiBsccus~PXGVWWCuGj4qjpsH1mV0gx64qAK5vZ-xoFIFi-JbZJikL8k8gWJQ62GMd2JplunGLj9lw5if2DBPO4tooPgxZ2FOd4axctIk0kStGYn2oTJUgJj506Zoie1cE5gug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Collaborat_ing_over_Paper_and_Digital_Media_Interactiv_e_Paper_Applicatio_ns_at_ETH_Zurich","translated_slug":"","page_count":26,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat 
Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77884248,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77884248/thumbnails/1.jpg","file_name":"collaboratingOverPaperAndDigitalMedia.pdf","download_url":"https://www.academia.edu/attachments/77884248/download_file","bulk_download_file_name":"Collaborat_ing_over_Paper_and_Digital_Me.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77884248/collaboratingOverPaperAndDigitalMedia-libre.pdf?1641125115=\u0026response-content-disposition=attachment%3B+filename%3DCollaborat_ing_over_Paper_and_Digital_Me.pdf\u0026Expires=1744203892\u0026Signature=DQORGzfX8F~oUgZc4byDGCofx-ANWH0HyNb3QALdGYMjPKwwZ6VJ5ITXLCNBVz3uDOgmT6OwDa4X8CWwGW-WghBRejrirrhrarmwkGoBc70icDwXxwru08EJZbggzA1K19xMd~kFbGJV2AB2FIBD5U2cXGTVu9~nl3rdWNvqsCn~a1YMEY9-OEqnygaj0OcEUJnkHtvvLQ8e3P1SHiBsccus~PXGVWWCuGj4qjpsH1mV0gx64qAK5vZ-xoFIFi-JbZJikL8k8gWJQ62GMd2JplunGLj9lw5if2DBPO4tooPgxZ2FOd4axctIk0kStGYn2oTJUgJj506Zoie1cE5gug__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"}],"urls":[{"id":15982395,"url":"https://speakerdeck.com/signer/collaborating-over-paper-and-digital-media-interactive-paper-applications-at-eth-zurich"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661722-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661723"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661723/Interactiv_e_Paper_Introducti_on_and_Basic_Technology_and_Interactiv_e_Paper_Applicatio_ns_and_Future_Developmen_ts"><img alt="Research paper thumbnail of Interactiv­e Paper - Introducti­on and Basic Technology &amp; Interactiv­e Paper - Applicatio­ns and Future Developmen­ts" class="work-thumbnail" src="https://attachments.academia-assets.com/77885705/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661723/Interactiv_e_Paper_Introducti_on_and_Basic_Technology_and_Interactiv_e_Paper_Applicatio_ns_and_Future_Developmen_ts">Interactiv­e Paper - Introducti­on and Basic 
Technology &amp; Interactiv­e Paper - Applicatio­ns and Future Developmen­ts</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="a5ff39a18299127ffa11173aeec282b8" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77885705,&quot;asset_id&quot;:1661723,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77885705/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661723"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661723"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661723; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661723]").text(description); $(".js-view-count[data-work-id=1661723]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661723; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661723']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "a5ff39a18299127ffa11173aeec282b8" } } $('.js-work-strip[data-work-id=1661723]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661723,"title":"Interactiv­e Paper - Introducti­on and Basic Technology \u0026 Interactiv­e Paper - Applicatio­ns and Future Developmen­ts","translated_title":"","metadata":{"location":"Y-Toolbox, Workshop about Paper, Y (Institute for Transdisciplinarity), Bern University of Arts, Bern, Switzerland","event_date":{"day":17,"month":4,"year":2008,"errors":{}},"ai_abstract":"The paper discusses the integration of electronic paper (e-paper) technologies with interactive systems, illustrating their applications and potential for future developments. 
It highlights the seamless communication between digital devices and physical paper through innovative tools, showcasing various examples like the usage of digital pens, automated systems, and interactive interfaces in diverse settings such as education and arts festivals.","publication_date":{"day":null,"month":null,"year":2008,"errors":{}}},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661723/Interactiv_e_Paper_Introducti_on_and_Basic_Technology_and_Interactiv_e_Paper_Applicatio_ns_and_Future_Developmen_ts","translated_internal_url":"","created_at":"2010-01-12T00:45:04.285-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":77885705,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77885705/thumbnails/1.jpg","file_name":"Introduction_and_Basic_Technology.pdf","download_url":"https://www.academia.edu/attachments/77885705/download_file","bulk_download_file_name":"Interactiv_e_Paper_Introducti_on_and_Bas.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77885705/Introduction_and_Basic_Technology-libre.pdf?1641127753=\u0026response-content-disposition=attachment%3B+filename%3DInteractiv_e_Paper_Introducti_on_and_Bas.pdf\u0026Expires=1744203892\u0026Signature=YU67JK903Q4gq5P~7rTu~QJq4DZTv~wjOAgbLN6romJrDPlTqavXYNHlwVeo81z88rMB6PhiudjrVc545gdk79eaakyzT5rfxZ4FhiqIRYf4~kr3eqdu5gvafpFr-7U91oH5bHzRrCIkTdsTvNXNrDarZUv8hGuDqklHy4YSV39RiTSYTNKm74IeVp5QAWzJ5x29MokcGZm2oEgBxlKWJ3t61mejFm8hHyqDvkPfmYETQBjhF7JHXDf9-wnJVACHhvKvSRCeQd~7EesQpk9ES4Fgveb1eMEiqNXub9emzEr1jO1D0xFT0A7s5Z586cYYTDIxUtjJ~p~NamvPpwNA2w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Interactiv_e_Paper_Introducti_on_and_Basic_Technology_and_Interactiv_e_Paper_Applicatio_ns_and_Future_Developmen_ts","translated_slug":"","page_count":50,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77885705,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77885705/thumbnails/1.jpg","file_name":"Introduction_and_Basic_Technology.pdf","download_url":"https://www.academia.edu/attachments/77885705/download_file","bulk_download_file_name":"Interactiv_e_Paper_Introducti_on_and_Bas.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77885705/Introduction_and_Basic_Technology-libre.pdf?1641127753=\u0026response-content-disposition=attachment%3B+filename%3DInteractiv_e_Paper_Introducti_on_and_Bas.pdf\u0026Expires=1744203892\u0026Signature=YU67JK903Q4gq5P~7rTu~QJq4DZTv~wjOAgbLN6romJrDPlTqavXYNHlwVeo81z88rMB6PhiudjrVc545gdk79eaakyzT5rfxZ4FhiqIRYf4~kr3eqdu5gvafpFr-7U91oH5bHzRrCIkTdsTvNXNrDarZUv8hGuDqklHy4YSV39RiTSYTNKm74IeVp5QAWzJ5x29MokcGZm2oEgBxlKWJ3t61mejFm8hHyqDvkPfmYETQBjhF7JHXDf9-wnJVACHhvKvSRCeQd~7EesQpk9ES4Fgveb1eMEiqNXub9emzEr1jO1D0xFT0A7s5Z586cYYTDIxUtjJ~p~NamvPpwNA2w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software 
Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":15982624,"url":"https://speakerdeck.com/signer/interactive-paper-introduction-and-basic-technology-and-interactive-paper-applications-and-future-developments"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661723-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661739"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661739/i3_Recent_iServer_iPaper_and_iGesture_Developmen_ts"><img alt="Research paper thumbnail of i3 - Recent iServer, iPaper and iGesture Developmen­ts" class="work-thumbnail" src="https://attachments.academia-assets.com/30277461/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661739/i3_Recent_iServer_iPaper_and_iGesture_Developmen_ts">i3 - Recent iServer, iPaper and iGesture Developmen­ts</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="2e818d09e8f46403c8c6eb6fb8d0ed03" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30277461,&quot;asset_id&quot;:1661739,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30277461/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661739"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" 
data-work-id="1661739"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661739; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661739]").text(description); $(".js-view-count[data-work-id=1661739]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661739; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661739']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "2e818d09e8f46403c8c6eb6fb8d0ed03" } } $('.js-work-strip[data-work-id=1661739]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661739,"title":"i3 - Recent iServer, iPaper and iGesture Developmen­ts","translated_title":"","metadata":{"location":"OMS Seminar, Zurich, Switzerland","event_date":{"day":5,"month":12,"year":2007,"errors":{}},"ai_abstract":"The paper discusses the recent developments in the i3 project, notably focusing on iServer, iPaper, and iGesture advancements. It highlights structural links between digital resources, the integration of various authoring tools, and features of interactive applications designed for pen-based interfaces. 
The contributions include metamodeling for hypermedia systems and enhancements in user interaction design aimed at improving the usability of digital content in a tangible format."},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661739/i3_Recent_iServer_iPaper_and_iGesture_Developmen_ts","translated_internal_url":"","created_at":"2010-01-12T15:17:01.563-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":30277461,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277461/thumbnails/1.jpg","file_name":"i3.pdf","download_url":"https://www.academia.edu/attachments/30277461/download_file","bulk_download_file_name":"i3_Recent_iServer_iPaper_and_iGesture_De.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277461/i3-libre.pdf?1390882723=\u0026response-content-disposition=attachment%3B+filename%3Di3_Recent_iServer_iPaper_and_iGesture_De.pdf\u0026Expires=1744203892\u0026Signature=BhhEYuiaF-SQ7JsMiPRbB6Ap-V3pppmufFyo65YuORML3SXn7PCkCcGpwsC6PjfFw0JjENXknTPrcWvUVLa8OMegts-Y8hQ-S5hHzjFEFzjUSrRYFLuJuy0bV6XiaQBZFt-T~GB44wBX7OnVeYt81s1W1-gKES13H0TphYAkBpyN1-XhJvnoCL2aCWW7gdNi~V7NttTMaQqYr6IdRWZVRV778UBAEQOMSevJFgR98FCSHMKqgkucbI4XZTFdwHjgZFX4XxNs-DbVkSI9Y1Dbz~168r-SXj4GPsyNMC36ACvYSTbxNknFt9Uh29YtowE8z~UpbLBG-uTyzsDw4MBFAg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"i3_Recent_iServer_iPaper_and_iGesture_Developmen_ts","translated_slug":"","page_count":42,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":30277461,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277461/thumbnails/1.jpg","file_name":"i3.pdf","download_url":"https://www.academia.edu/attachments/30277461/download_file","bulk_download_file_name":"i3_Recent_iServer_iPaper_and_iGesture_De.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277461/i3-libre.pdf?1390882723=\u0026response-content-disposition=attachment%3B+filename%3Di3_Recent_iServer_iPaper_and_iGesture_De.pdf\u0026Expires=1744203892\u0026Signature=BhhEYuiaF-SQ7JsMiPRbB6Ap-V3pppmufFyo65YuORML3SXn7PCkCcGpwsC6PjfFw0JjENXknTPrcWvUVLa8OMegts-Y8hQ-S5hHzjFEFzjUSrRYFLuJuy0bV6XiaQBZFt-T~GB44wBX7OnVeYt81s1W1-gKES13H0TphYAkBpyN1-XhJvnoCL2aCWW7gdNi~V7NttTMaQqYr6IdRWZVRV778UBAEQOMSevJFgR98FCSHMKqgkucbI4XZTFdwHjgZFX4XxNs-DbVkSI9Y1Dbz~168r-SXj4GPsyNMC36ACvYSTbxNknFt9Uh29YtowE8z~UpbLBG-uTyzsDw4MBFAg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":2879,"name":"Ubiquitous 
Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":295120,"url":"http://www.slideshare.net/signer/i3-recent-iserver-ipaper-and-igesture-developments"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661739-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661741"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661741/iPaper_at_Glo_bIS_Interactiv_e_Paper_Research"><img alt="Research paper thumbnail of iPaper@Glo­bIS - Interactiv­e Paper Research" class="work-thumbnail" src="https://attachments.academia-assets.com/30277392/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661741/iPaper_at_Glo_bIS_Interactiv_e_Paper_Research">iPaper@Glo­bIS - Interactiv­e Paper Research</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated"> Many projects focus on the input device, paper, printing and other hardware technologies rather...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden"> Many projects focus on the input device, paper, printing and other hardware technologies rather than on the data integration and information management aspects  isolated solutions  The linking of paper tends to be based on physical rather than information-centric concepts  difficult to integrate new input / output devices CSCW Seminar</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="222d5ab8bc2fa9a513d82a0932367f99" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30277392,&quot;asset_id&quot;:1661741,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30277392/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner 
inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661741"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661741"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661741; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661741]").text(description); $(".js-view-count[data-work-id=1661741]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661741; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661741']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "222d5ab8bc2fa9a513d82a0932367f99" } } $('.js-work-strip[data-work-id=1661741]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661741,"title":"iPaper@Glo­bIS - Interactiv­e Paper Research","translated_title":"","metadata":{"location":"CSCW Seminar, ETH Zurich","event_date":{"day":null,"month":6,"year":2007,"errors":{}},"ai_title_tag":"iPaper@Glo­bIS: Enhancing Paper Data Integration","grobid_abstract":" Many projects focus on the input device, paper, printing and other hardware technologies rather than on the data integration and information management aspects  isolated solutions  The linking of paper tends to be based on physical rather than information-centric concepts  difficult to integrate new input / output devices CSCW 
Seminar","grobid_abstract_attachment_id":30277392},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661741/iPaper_at_Glo_bIS_Interactiv_e_Paper_Research","translated_internal_url":"","created_at":"2010-01-12T15:31:03.136-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":30277392,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277392/thumbnails/1.jpg","file_name":"ipaper_at_GlobIS.pdf","download_url":"https://www.academia.edu/attachments/30277392/download_file","bulk_download_file_name":"iPaper_at_Glo_bIS_Interactiv_e_Paper_Res.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277392/ipaper_at_GlobIS-libre.pdf?1390882733=\u0026response-content-disposition=attachment%3B+filename%3DiPaper_at_Glo_bIS_Interactiv_e_Paper_Res.pdf\u0026Expires=1744203892\u0026Signature=FOuGbjL6l-jTYagb9Wg9oL6~hJcIL9pCUFyvWBjz~iYOa390i5D6g8jR8IS~7jhyxkcrsrb9lCdUiDleWue4osUoRf-6lxZGwyjdZNuEM53QXngTbGgn2oi6YFSvYNr4JDZgn1UaciC4vuMuoSJAGek8aWvCXVaYt93fVsx8oztIrXt-XcoeudUMsPA~oB2Ri5uWQYWE3tcZpamdBMfBBefQ2xrNgD8or0DGj434H~3r5c-rEx64gOaqBC5bkIOcwFNM~uKY7uKCe7p9Gv4JI11n48Kpit0hkvF1pfTb2e4~HaVMTCD3DMqFH3Vq073KOrDWx-kXNA9YUg0XS-N6MQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"iPaper_at_Glo_bIS_Interactiv_e_Paper_Research","translated_slug":"","page_count":35,"language":"en","content_type":"Work","summary":" Many projects focus on the input device, paper, printing and other hardware technologies rather than on the data integration and information management aspects  isolated solutions  The linking of paper tends to be based on physical rather than information-centric concepts  difficult to integrate new input / output devices CSCW Seminar","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":30277392,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277392/thumbnails/1.jpg","file_name":"ipaper_at_GlobIS.pdf","download_url":"https://www.academia.edu/attachments/30277392/download_file","bulk_download_file_name":"iPaper_at_Glo_bIS_Interactiv_e_Paper_Res.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277392/ipaper_at_GlobIS-libre.pdf?1390882733=\u0026response-content-disposition=attachment%3B+filename%3DiPaper_at_Glo_bIS_Interactiv_e_Paper_Res.pdf\u0026Expires=1744203892\u0026Signature=FOuGbjL6l-jTYagb9Wg9oL6~hJcIL9pCUFyvWBjz~iYOa390i5D6g8jR8IS~7jhyxkcrsrb9lCdUiDleWue4osUoRf-6lxZGwyjdZNuEM53QXngTbGgn2oi6YFSvYNr4JDZgn1UaciC4vuMuoSJAGek8aWvCXVaYt93fVsx8oztIrXt-XcoeudUMsPA~oB2Ri5uWQYWE3tcZpamdBMfBBefQ2xrNgD8or0DGj434H~3r5c-rEx64gOaqBC5bkIOcwFNM~uKY7uKCe7p9Gv4JI11n48Kpit0hkvF1pfTb2e4~HaVMTCD3DMqFH3Vq073KOrDWx-kXNA9YUg0XS-N6MQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented 
Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":295122,"url":"http://www.slideshare.net/signer/ipaperglobis-interactive-paper-research"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661741-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661742"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661742/Bridging_the_Paper_Digi_tal_Divide_The_iPaper_Interactiv_e_Paper_Framework"><img alt="Research paper thumbnail of Bridging the Paper-Digi­tal Divide: The iPaper Interactiv­e Paper Framework" class="work-thumbnail" src="https://attachments.academia-assets.com/30277386/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661742/Bridging_the_Paper_Digi_tal_Divide_The_iPaper_Interactiv_e_Paper_Framework">Bridging the Paper-Digi­tal Divide: The iPaper Interactiv­e Paper Framework</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="512ed4311c140a05d36cd09a051249c6" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30277386,&quot;asset_id&quot;:1661742,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30277386/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661742"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661742"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661742; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661742]").text(description); $(".js-view-count[data-work-id=1661742]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661742; 

Bridging the Paper-Digital Divide: The iPaper Interactive Paper Framework
Talk, Research Seminar, Faculty of Informatics, University of Lugano, May 2007 (33 slides)
Citations form the basis for a web of scientific publications. Search engines, embedded hyperlinks and digital libraries all simplify the task of finding publications of interest on the web and navigating to cited publications or websites. However, the actual reading of publications often takes place on paper and frequently on the move. We present Print-n-Link, a system that uses interactive paper technologies to enhance the reading process by enabling users to access digital information and/or search for cited documents from a printed version of a publication, using a digital pen for interaction. A special virtual printer driver automatically generates links from paper to digital services during the printing process, based on an analysis of the PDF documents. Depending on the user settings and the interaction gesture, the system may retrieve metadata about a citation and inform the user through an audio channel, or directly display the cited document on the user's screen.
Slides: http://www.slideshare.net/signer/bridging-the-paperdigital-divide-the-ipaper-interactive-paper-framework
Topics: Computer Science, Software Engineering, Human Computer Interaction, New Media, Ubiquitous Computing, Multimedia, Augmented Reality, Interactive and Digital Media, Cross-Media Information Spaces, Digital Pen and Paper, Tangible User Interfaces, Hypermedia
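
The printer-driver step can be pictured as a pass over the PDF text that records the page and bounding box of every citation and emits a paper-to-service link for each one. A schematic sketch; the word iterator stands in for a real PDF text extractor (e.g. one built on pdfminer), and the regular expression only covers numeric citations such as [12]:

```python
import re

CITATION = re.compile(r"\[(\d+)\]")  # numeric citations such as [12]

def generate_links(words):
    """words: iterable of (page, text, bbox) tuples from a PDF text extractor.
    Returns one paper-to-service link per citation found."""
    links = []
    for page, text, bbox in words:
        for match in CITATION.finditer(text):
            links.append({
                "page": page,
                "area": bbox,  # printed region of the citation
                "service": f"cite-lookup:{match.group(1)}",
            })
    return links

# The driver would attach these links to the interactive paper document it
# produces, so that a pen tap on "[12]" later triggers the citation lookup.
print(generate_links([(3, "as shown in [12]", (100, 640, 160, 652))]))
```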
Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":295123,"url":"http://www.slideshare.net/signer/bridging-the-paperdigital-divide-the-ipaper-interactive-paper-framework"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661742-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1661743"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1661743/Bridging_the_Paper_Digi_tal_Divide"><img alt="Research paper thumbnail of Bridging the Paper-Digi­tal Divide" class="work-thumbnail" src="https://attachments.academia-assets.com/30277388/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661743/Bridging_the_Paper_Digi_tal_Divide">Bridging the Paper-Digi­tal Divide</a></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-1661743-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-1661743-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941294/figure-11-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_011.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941295/figure-12-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_012.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941297/figure-13-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_013.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941299/figure-14-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_014.jpg" /></a></figure><figure class="figure-slide-container"><a 
href="https://www.academia.edu/figures/37941256/figure-1-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941261/figure-2-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941265/figure-3-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941270/figure-4-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941274/figure-5-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941300/figure-15-two-dimensional-matrix-code"><img alt="Two-dimensional matrix code " class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_015.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941321/figure-19-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_019.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941326/figure-20-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_020.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941328/figure-21-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_021.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941332/figure-22-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_022.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941333/figure-23-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_023.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941336/figure-24-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_024.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941342/figure-25-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_025.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941303/figure-16-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_016.jpg" /></a></figure><figure 
class="figure-slide-container"><a href="https://www.academia.edu/figures/37941309/figure-17-sony-librl-ebook-reader"><img alt="Sony LIBRlé eBook Reader " class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_017.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941318/figure-18-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_018.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941277/figure-6-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941283/figure-7-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941286/figure-8-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941291/figure-9-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_009.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/37941292/figure-10-bridging-the-paper-digital-divide"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/30277388/figure_010.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-1661743-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b790fe48c99f598f39b98323400cd8bb" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:30277388,&quot;asset_id&quot;:1661743,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/30277388/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661743"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661743"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661743; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661743]").text(description); $(".js-view-count[data-work-id=1661743]").attr('title', description).tooltip(); }); 
});</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661743; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661743']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b790fe48c99f598f39b98323400cd8bb" } } $('.js-work-strip[data-work-id=1661743]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661743,"title":"Bridging the Paper-Digi­tal Divide","translated_title":"","metadata":{"location":"CSCW Seminar, ETH Zurich","event_date":{"day":null,"month":3,"year":2007,"errors":{}},"ai_abstract":"The relationship between traditional paper and digital media is often perceived as oppositional, yet evidence suggests they function more effectively in tandem. This article explores technologies and applications that bridge the paper-digital divide, highlighting innovative methods for integrating physical documents with digital information. 
Emphasis is placed on tracking, writing capture, and enhanced user experiences through augmented paper solutions, aiming for a seamless cross-media interaction.","ai_title_tag":"Integrating Paper and Digital Media Solutions"},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661743/Bridging_the_Paper_Digi_tal_Divide","translated_internal_url":"","created_at":"2010-01-12T15:41:46.913-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"talk","co_author_tags":[],"downloadable_attachments":[{"id":30277388,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277388/thumbnails/1.jpg","file_name":"Bridging_the_Paper-Digital_Divide.pdf","download_url":"https://www.academia.edu/attachments/30277388/download_file","bulk_download_file_name":"Bridging_the_Paper_Digi_tal_Divide.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277388/Bridging_the_Paper-Digital_Divide-libre.pdf?1390882733=\u0026response-content-disposition=attachment%3B+filename%3DBridging_the_Paper_Digi_tal_Divide.pdf\u0026Expires=1744203892\u0026Signature=Gc9BtmqeIvlmYS9jBvkAwyIVmOC4yCeHHyWtVcXCxj~Se~vPgbK0knCpVs2-PWNkCplzBsNcWr3vkKndVg-W0tUxSyDESzZEsWdXlE90RKd-PYsyaE1LWEkLuPMkoKSr8U0Z1iqAgxsXXkTIF43IqiOkgVrqhHRecLufNCQhEIXbqR-FzxE6kf7P52OxtmRZ-pKOaAdmg8XNweJJ2j-d6ve-1DxoKVjPdxicKo8tJXOlW-zSu8yzAl8nbwdrqXWpa-09tI884cEUs1T8k8sZrwApYCcdo4Isy4k8PEo3A0B5ElUbnuF0DAm0PY7N7Xyz7JKF5hu9R6y3cu3Z7oVRYw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Bridging_the_Paper_Digi_tal_Divide","translated_slug":"","page_count":44,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":30277388,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/30277388/thumbnails/1.jpg","file_name":"Bridging_the_Paper-Digital_Divide.pdf","download_url":"https://www.academia.edu/attachments/30277388/download_file","bulk_download_file_name":"Bridging_the_Paper_Digi_tal_Divide.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/30277388/Bridging_the_Paper-Digital_Divide-libre.pdf?1390882733=\u0026response-content-disposition=attachment%3B+filename%3DBridging_the_Paper_Digi_tal_Divide.pdf\u0026Expires=1744203892\u0026Signature=Gc9BtmqeIvlmYS9jBvkAwyIVmOC4yCeHHyWtVcXCxj~Se~vPgbK0knCpVs2-PWNkCplzBsNcWr3vkKndVg-W0tUxSyDESzZEsWdXlE90RKd-PYsyaE1LWEkLuPMkoKSr8U0Z1iqAgxsXXkTIF43IqiOkgVrqhHRecLufNCQhEIXbqR-FzxE6kf7P52OxtmRZ-pKOaAdmg8XNweJJ2j-d6ve-1DxoKVjPdxicKo8tJXOlW-zSu8yzAl8nbwdrqXWpa-09tI884cEUs1T8k8sZrwApYCcdo4Isy4k8PEo3A0B5ElUbnuF0DAm0PY7N7Xyz7JKF5hu9R6y3cu3Z7oVRYw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented 
Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":295124,"url":"http://www.slideshare.net/signer/bridging-the-paperdigital-divide"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-1661743-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="11475754" id="datasets"><div class="js-work-strip profile--work_container" data-work-id="128548420"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128548420/Dataset_of_Survey_on_Current_Email_Management_Practices"><img alt="Research paper thumbnail of Dataset of Survey on Current Email Management Practices" class="work-thumbnail" src="https://attachments.academia-assets.com/122110845/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128548420/Dataset_of_Survey_on_Current_Email_Management_Practices">Dataset of Survey on Current Email Management Practices</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This dataset contains anonymised survey responses from a comprehensive study conducted to explore...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This dataset contains anonymised survey responses from a comprehensive study conducted to explore current email management practices among users. The survey aimed to gain insights into how individuals handle and organize their email communications in various contexts. The survey questionnaire consisted of carefully designed questions related to email usage patterns, organisational strategies, folder structures, and automation utilised for email management. The survey also explored participants&#39; preferences for automated rule-based filtering functionality and any challenges they face in effectively managing their mailbox. Researchers and professionals interested in email management and information organisation can leverage this dataset for research, analysis, and potential improvements in email client design and functionality. 
We kindly request that any publications or research utilising this dataset appropriately acknowledge and cite the original source to ensure proper attribution to the survey and its participants.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="48e1192efe76c4b9ae81bcd6bff08373" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122110845,&quot;asset_id&quot;:128548420,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122110845/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128548420"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128548420"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128548420; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128548420]").text(description); $(".js-view-count[data-work-id=128548420]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128548420; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128548420']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "48e1192efe76c4b9ae81bcd6bff08373" } } $('.js-work-strip[data-work-id=128548420]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":128548420,"title":"Dataset of Survey on Current Email Management Practices","translated_title":"","metadata":{"doi":"10.5281/zenodo.8028567","abstract":"This dataset contains anonymised survey responses from a comprehensive study conducted to explore current email management practices among users. The survey aimed to gain insights into how individuals handle and organize their email communications in various contexts. 
The survey questionnaire consisted of carefully designed questions related to email usage patterns, organisational strategies, folder structures, and automation utilised for email management. The survey also explored participants' preferences for automated rule-based filtering functionality and any challenges they face in effectively managing their mailbox. Researchers and professionals interested in email management and information organisation can leverage this dataset for research, analysis, and potential improvements in email client design and functionality. We kindly request that any publications or research utilising this dataset appropriately acknowledge and cite the original source to ensure proper attribution to the survey and its participants.","grobid_abstract":"This dataset contains anonymised survey responses from a comprehensive study conducted to explore current email management practices among users. The survey aimed to gain insights into how individuals handle and organize their email communications in various contexts. The survey questionnaire consisted of carefully designed questions related to email usage patterns, organisational strategies, folder structures, and automation utilised for email management. The survey also explored participants' preferences for automated rule-based filtering functionality and any challenges they face in effectively managing their mailbox. Researchers and professionals interested in email management and information organisation can leverage this dataset for research, analysis, and potential improvements in email client design and functionality. We kindly request that any publications or research utilising this dataset appropriately acknowledge and cite the original source to ensure proper attribution to the survey and its participants.","publication_date":{"day":null,"month":null,"year":2023,"errors":{}},"grobid_abstract_attachment_id":122110845},"translated_abstract":"This dataset contains anonymised survey responses from a comprehensive study conducted to explore current email management practices among users. The survey aimed to gain insights into how individuals handle and organize their email communications in various contexts. The survey questionnaire consisted of carefully designed questions related to email usage patterns, organisational strategies, folder structures, and automation utilised for email management. The survey also explored participants' preferences for automated rule-based filtering functionality and any challenges they face in effectively managing their mailbox. Researchers and professionals interested in email management and information organisation can leverage this dataset for research, analysis, and potential improvements in email client design and functionality. 
We kindly request that any publications or research utilising this dataset appropriately acknowledge and cite the original source to ensure proper attribution to the survey and its participants.","internal_url":"https://www.academia.edu/128548420/Dataset_of_Survey_on_Current_Email_Management_Practices","translated_internal_url":"","created_at":"2025-03-31T17:39:48.184-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"book","co_author_tags":[],"downloadable_attachments":[{"id":122110845,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122110845/thumbnails/1.jpg","file_name":"CurrentEmailManagementPractices.pdf","download_url":"https://www.academia.edu/attachments/122110845/download_file","bulk_download_file_name":"Dataset_of_Survey_on_Current_Email_Manag.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122110845/CurrentEmailManagementPractices-libre.pdf?1743471449=\u0026response-content-disposition=attachment%3B+filename%3DDataset_of_Survey_on_Current_Email_Manag.pdf\u0026Expires=1744203892\u0026Signature=cUF2150NeftlG3iAHTnkbzgYrkPX9yVwsEw8dN~xdsv~TLEuE~6jL1b1tccKzqzx11MiWoj16u88vFw3ZsCy5tBLY83LrNybFRISWDhup0Z2EPiybKQ2p0Kjn0xdNfIn1pppPteOBwZEaFIpK4ImB-Cl3BZn-AU7Apo596N3p6xN4Fa8uoXRibRfdTpZaHOT22bzJPPriLFBxdyU0aie1-L0Q1pB6P3KXXys-DOfs~wjovnbQe5JsVU0EyjWkJ8js4wkLK7EMCNKIFPh~4nKlK3PUyFiGt3jTvkLO8IhxHucEYMarq7eDQw34C-rJ7awIykQc06oGjLCB9X9ub3xdQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Dataset_of_Survey_on_Current_Email_Management_Practices","translated_slug":"","page_count":5,"language":"en","content_type":"Work","summary":"This dataset contains anonymised survey responses from a comprehensive study conducted to explore current email management practices among users. The survey aimed to gain insights into how individuals handle and organize their email communications in various contexts. The survey questionnaire consisted of carefully designed questions related to email usage patterns, organisational strategies, folder structures, and automation utilised for email management. The survey also explored participants' preferences for automated rule-based filtering functionality and any challenges they face in effectively managing their mailbox. Researchers and professionals interested in email management and information organisation can leverage this dataset for research, analysis, and potential improvements in email client design and functionality. 
We kindly request that any publications or research utilising this dataset appropriately acknowledge and cite the original source to ensure proper attribution to the survey and its participants.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122110845,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122110845/thumbnails/1.jpg","file_name":"CurrentEmailManagementPractices.pdf","download_url":"https://www.academia.edu/attachments/122110845/download_file","bulk_download_file_name":"Dataset_of_Survey_on_Current_Email_Manag.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122110845/CurrentEmailManagementPractices-libre.pdf?1743471449=\u0026response-content-disposition=attachment%3B+filename%3DDataset_of_Survey_on_Current_Email_Manag.pdf\u0026Expires=1744203892\u0026Signature=cUF2150NeftlG3iAHTnkbzgYrkPX9yVwsEw8dN~xdsv~TLEuE~6jL1b1tccKzqzx11MiWoj16u88vFw3ZsCy5tBLY83LrNybFRISWDhup0Z2EPiybKQ2p0Kjn0xdNfIn1pppPteOBwZEaFIpK4ImB-Cl3BZn-AU7Apo596N3p6xN4Fa8uoXRibRfdTpZaHOT22bzJPPriLFBxdyU0aie1-L0Q1pB6P3KXXys-DOfs~wjovnbQe5JsVU0EyjWkJ8js4wkLK7EMCNKIFPh~4nKlK3PUyFiGt3jTvkLO8IhxHucEYMarq7eDQw34C-rJ7awIykQc06oGjLCB9X9ub3xdQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":17624,"name":"E Mail Interactions","url":"https://www.academia.edu/Documents/in/E_Mail_Interactions"},{"id":117592,"name":"E-Mail","url":"https://www.academia.edu/Documents/in/E-Mail"},{"id":197069,"name":"Electronic mail","url":"https://www.academia.edu/Documents/in/Electronic_mail"},{"id":1029902,"name":"Dataset","url":"https://www.academia.edu/Documents/in/Dataset"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-128548420-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="128546353"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128546353/Dataset_for_Personalised_Learning_Environments_Based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development_"><img alt="Research paper thumbnail of Dataset for &quot;Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development&quot;" class="work-thumbnail" src="https://attachments.academia-assets.com/122109089/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128546353/Dataset_for_Personalised_Learning_Environments_Based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development_">Dataset for &quot;Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development&quot;</a></div><div 
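As a quick illustration of how such anonymised survey responses might be explored, the sketch below loads a hypothetical CSV export with pandas and tabulates two hypothetical columns; the file name and column names are assumptions for illustration and not part of the published dataset.

import pandas as pd  # pip install pandas

# Hypothetical CSV export of the anonymised survey responses.
responses = pd.read_csv("email_management_survey.csv")

# Share of respondents who organise mail into folders (hypothetical column).
print(responses["uses_folders"].value_counts(normalize=True))

# Folder usage versus interest in rule-based filtering (hypothetical columns).
print(pd.crosstab(responses["uses_folders"], responses["wants_rule_based_filtering"]))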
class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The dataset accompanying our paper &quot;Personalised Learning Environments Based on Knowledge Graphs ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The dataset accompanying our paper &quot;Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development&quot; published in proceedings of CSEDU 2022.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="f1e658e2d34903ac5f084a00c25537c0" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122109089,&quot;asset_id&quot;:128546353,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122109089/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128546353"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128546353"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128546353; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128546353]").text(description); $(".js-view-count[data-work-id=128546353]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128546353; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128546353']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "f1e658e2d34903ac5f084a00c25537c0" } } $('.js-work-strip[data-work-id=128546353]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":128546353,"title":"Dataset for \"Personalised Learning Environments Based on Knowledge Graphs and the 
Zone of Proximal Development\"","translated_title":"","metadata":{"doi":"10.5281/zenodo.6091625","abstract":"The dataset accompanying our paper \"Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development\" published in proceedings of CSEDU 2022.","publication_date":{"day":null,"month":null,"year":2022,"errors":{}}},"translated_abstract":"The dataset accompanying our paper \"Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development\" published in proceedings of CSEDU 2022.","internal_url":"https://www.academia.edu/128546353/Dataset_for_Personalised_Learning_Environments_Based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development_","translated_internal_url":"","created_at":"2025-03-31T14:06:43.013-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"book","co_author_tags":[{"id":43317679,"work_id":128546353,"tagging_user_id":13155,"tagged_user_id":230910613,"co_author_invite_id":null,"email":"y***e@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Yoshi Malaise","title":"Dataset for \"Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development\""}],"downloadable_attachments":[{"id":122109089,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122109089/thumbnails/1.jpg","file_name":"PersonalisedLearningEnvironments.pdf","download_url":"https://www.academia.edu/attachments/122109089/download_file","bulk_download_file_name":"Dataset_for_Personalised_Learning_Enviro.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122109089/PersonalisedLearningEnvironments-libre.pdf?1743455736=\u0026response-content-disposition=attachment%3B+filename%3DDataset_for_Personalised_Learning_Enviro.pdf\u0026Expires=1744203892\u0026Signature=WDb0~XU2a9qD6oSppFPvwUVM8XWlCFEdGTx5yAVVMECspXsxjSEeGOkx70I3TxH6ibAkivne2YFYdwtpmLWSeX~abPpWwixbtmhFLUITpCBLtu2jpB6yilfXpddfxonSHDJPJEbEVBvPVvs9vu9pCTCxMdMh85uhaICbMl-zwcdgXM0bLnDaGxb4bulXp-~zGoU1JpNH6SuJW74BVEOieoXW2LCFrJGyjXmRQBvUvBzyCPEhSNwnGNW8zgjnTYJ~xld-KFCYw1Bkw7aKhYWpLIBiTEQoEQPcfE6b~abaoaKyEfw2VyZOlawkU8uT5q1aKrvB~2k3W6VL77U63Zd1NA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Dataset_for_Personalised_Learning_Environments_Based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development_","translated_slug":"","page_count":5,"language":"en","content_type":"Work","summary":"The dataset accompanying our paper \"Personalised Learning Environments Based on Knowledge Graphs and the Zone of Proximal Development\" published in proceedings of CSEDU 2022.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat 
Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122109089,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122109089/thumbnails/1.jpg","file_name":"PersonalisedLearningEnvironments.pdf","download_url":"https://www.academia.edu/attachments/122109089/download_file","bulk_download_file_name":"Dataset_for_Personalised_Learning_Enviro.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122109089/PersonalisedLearningEnvironments-libre.pdf?1743455736=\u0026response-content-disposition=attachment%3B+filename%3DDataset_for_Personalised_Learning_Enviro.pdf\u0026Expires=1744203892\u0026Signature=WDb0~XU2a9qD6oSppFPvwUVM8XWlCFEdGTx5yAVVMECspXsxjSEeGOkx70I3TxH6ibAkivne2YFYdwtpmLWSeX~abPpWwixbtmhFLUITpCBLtu2jpB6yilfXpddfxonSHDJPJEbEVBvPVvs9vu9pCTCxMdMh85uhaICbMl-zwcdgXM0bLnDaGxb4bulXp-~zGoU1JpNH6SuJW74BVEOieoXW2LCFrJGyjXmRQBvUvBzyCPEhSNwnGNW8zgjnTYJ~xld-KFCYw1Bkw7aKhYWpLIBiTEQoEQPcfE6b~abaoaKyEfw2VyZOlawkU8uT5q1aKrvB~2k3W6VL77U63Zd1NA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":92171,"name":"Personalised learning","url":"https://www.academia.edu/Documents/in/Personalised_learning"},{"id":956906,"name":"Personalised Learning Environments","url":"https://www.academia.edu/Documents/in/Personalised_Learning_Environments"},{"id":1029902,"name":"Dataset","url":"https://www.academia.edu/Documents/in/Dataset"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-128546353-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="128545592"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128545592/Object_Tracking_on_a_Monopoly_Game_Board"><img alt="Research paper thumbnail of Object Tracking on a Monopoly Game Board" class="work-thumbnail" src="https://attachments.academia-assets.com/122108578/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128545592/Object_Tracking_on_a_Monopoly_Game_Board">Object Tracking on a Monopoly Game Board</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make use of object classification where our training data consists of 100 pictures (taken at an angle) of the game board in order to classify the individual (moving) pieces. 
The training dataset was on the 9th of April 2023 and the test date recorded on the 7th of May 2023 using an iPhone 13 mini and iPhone 12.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="593ad4fb966d235babb0521891e5b580" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122108578,&quot;asset_id&quot;:128545592,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122108578/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="128545592"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="128545592"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 128545592; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=128545592]").text(description); $(".js-view-count[data-work-id=128545592]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 128545592; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='128545592']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "593ad4fb966d235babb0521891e5b580" } } $('.js-work-strip[data-work-id=128545592]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":128545592,"title":"Object Tracking on a Monopoly Game Board","translated_title":"","metadata":{"doi":"10.5281/zenodo.7990434","abstract":"The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make use of object classification where our training data consists of 100 pictures (taken at an angle) of the game board in order to classify the individual (moving) pieces. 
The training dataset was on the 9th of April 2023 and the test date recorded on the 7th of May 2023 using an iPhone 13 mini and iPhone 12.","grobid_abstract":"The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make use of object classification where our training data consists of 100 pictures (taken at an angle) of the game board in order to classify the individual (moving) pieces. The training dataset was on the 9th of April 2023 and the test date recorded on the 7th of May 2023 using an iPhone 13 mini and iPhone 12. Two participants played a game of Monopoly and each individually took pictures of the current game state after every move. These images were then processed by our application to determine the location of pawns and other game pieces such as the red and green houses. Raw images are unprocessed but may have minor edits to ensure anonymisation of participants in the background. We used Roboflow to label and train our dataset which is included in this repository. For more information about our processing and this dataset you can download the full Bachelor thesis here: (download link available after embargo at the end of the academic year) This dataset was published as part of the bachelor thesis: Location Tracking on a Physical Game Board for obtaining the degree of Bachelor in Computer Sciences at the Vrije Universiteit Brussel.","publication_date":{"day":null,"month":null,"year":2023,"errors":{}},"grobid_abstract_attachment_id":122108578},"translated_abstract":"The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make use of object classification where our training data consists of 100 pictures (taken at an angle) of the game board in order to classify the individual (moving) pieces. 
The training dataset was on the 9th of April 2023 and the test date recorded on the 7th of May 2023 using an iPhone 13 mini and iPhone 12.","internal_url":"https://www.academia.edu/128545592/Object_Tracking_on_a_Monopoly_Game_Board","translated_internal_url":"","created_at":"2025-03-31T13:26:03.189-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"book","co_author_tags":[{"id":43317599,"work_id":128545592,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Maxim Van de Wynckel","title":"Object Tracking on a Monopoly Game Board"}],"downloadable_attachments":[{"id":122108578,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122108578/thumbnails/1.jpg","file_name":"MonopolyGameBoard.pdf","download_url":"https://www.academia.edu/attachments/122108578/download_file","bulk_download_file_name":"Object_Tracking_on_a_Monopoly_Game_Board.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122108578/MonopolyGameBoard-libre.pdf?1743455831=\u0026response-content-disposition=attachment%3B+filename%3DObject_Tracking_on_a_Monopoly_Game_Board.pdf\u0026Expires=1744203892\u0026Signature=NQO12GQw92e2NkGBHtMQyLzn8fyVFLK9GtzK~PRHufuSOJeSx5AuPMRNV25vEMkGHqurLWH5AyMd7Hu25ZlZT-hQaf~4HUQnK9DlgRhER9M5PbOODeO53woix88FN4JeJIiBWTwbC70VW-MimKsDY70HyomfTBG4WDvDS553APhTrG28ZiXONtu7UlvAgeJDb-oIF9jy8Xv-SVRd5MNOqzDguW9UQ6XiBhKe-Ptdcnww~1xeNKdILmVauA1ZvQD-HvxhNMIF0mJAF1fKFdOuKNKG~yjHg4g4sBPpk4xnxLjK2ccsK7yWoyfx2lltP9XkdCzLuhtqWSFoKU-aklBnDQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Object_Tracking_on_a_Monopoly_Game_Board","translated_slug":"","page_count":6,"language":"en","content_type":"Work","summary":"The goal of this dataset was to track game pieces on the physical game board of Monopoly. We make use of object classification where our training data consists of 100 pictures (taken at an angle) of the game board in order to classify the individual (moving) pieces. 
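The exact detector trained from the Roboflow-labelled data is not specified in this listing, so the sketch below merely assumes a YOLO-style model exported to a local weights file and shows how detections on a new board photo would be read out with the ultralytics package; the weights and image file names are hypothetical.

from ultralytics import YOLO  # pip install ultralytics

# Hypothetical weights produced after training on the labelled dataset.
model = YOLO("monopoly_pieces.pt")

# Run detection on a single photo of the current game state.
result = model("board_state.jpg")[0]
for box in result.boxes:
    label = result.names[int(box.cls[0])]  # e.g. "pawn" or "red_house"
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box in pixels
    print(f"{label}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})")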
802.11 Management Frames From a Public Location
Dataset, 2023, with Maxim Van de Wynckel (Vrije Universiteit Brussel). DOI: 10.5281/zenodo.8003772

Abstract: The following datasets were captured at a busy Belgian train station between 9 pm and 10 pm; they contain all 802.11 management frames that were captured. The two datasets were recorded approximately 20 minutes apart.

Research interests: Computer Science, Information Technology, Informatics, 802.11 Positioning
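Assuming the captures are available as pcap files (the file name below is hypothetical), a minimal Scapy sketch can isolate the 802.11 management frames (frame type 0) and, for example, count probe requests (subtype 4) per source MAC address:

from collections import Counter
from scapy.all import rdpcap, Dot11  # pip install scapy

packets = rdpcap("train_station.pcap")  # hypothetical capture file

# Type 0 marks 802.11 management frames; subtype 4 is a probe request.
mgmt = [p for p in packets if p.haslayer(Dot11) and p[Dot11].type == 0]
probes = Counter(p[Dot11].addr2 for p in mgmt if p[Dot11].subtype == 4)

print(f"{len(mgmt)} management frames in total")
for mac, count in probes.most_common(5):
    print(f"{mac}: {count} probe requests")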
both datasets were captured with approximately 20 minutes between then.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122107679,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122107679/thumbnails/1.jpg","file_name":"802.11_Managemement_frames.pdf","download_url":"https://www.academia.edu/attachments/122107679/download_file","bulk_download_file_name":"802_11_Managemement_Frames_From_a_Public.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122107679/802.11_Managemement_frames-libre.pdf?1743452076=\u0026response-content-disposition=attachment%3B+filename%3D802_11_Managemement_Frames_From_a_Public.pdf\u0026Expires=1744203892\u0026Signature=HNvAS2vWVWOqUDb4ytju3D7lHrRekT4y~MmJTL0aT~W2fCcCwSckiKOZhGi3QjJ0977mdFink-UQejXZVJ5D1LE3DFtO2WLHvmjtvwKZ0MrCgWxdyUeFbxSf4K8XKGVY5Gi3gWJ3~6-saiuAj8DnunVTj2IbvY2OTcvwKetpYWcy3bLoX1HAoUfLhdfFGVfC7TkNj0UYSZ7SpyKcSpuwxPeYfG~5u0XuvO8TxqFF-gTHwsDQp-J2dzORNK58PVEClOaaypcSGn4ElHcwdmKH49YEr6ohbMO56XfKo-D52imu4TimRb9DeFtLtoRrmUxBOnFb~fnaS~DQ8vm0B1cRMw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":980131,"name":"802.11 Positioning","url":"https://www.academia.edu/Documents/in/802.11_Positioning"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-128544341-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="128544281"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/128544281/Transforming_Proprietary_to_High_Level_Trigger_Action_Programming_Rules_Dataset"><img alt="Research paper thumbnail of Transforming Proprietary to High-Level Trigger-Action Programming Rules Dataset" class="work-thumbnail" src="https://attachments.academia-assets.com/122107584/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/128544281/Transforming_Proprietary_to_High_Level_Trigger_Action_Programming_Rules_Dataset">Transforming Proprietary to High-Level Trigger-Action Programming Rules Dataset</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This dataset contains the results of performing Natural Language Processing (NLP) techniques on t...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This dataset contains the results of performing Natural Language Processing (NLP) techniques on 
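Purely as an illustration of how such a capture can be made, here is a minimal Python/scapy sketch that records 802.11 management frames. The monitor-mode interface name and the output filename are assumptions, and the actual setup used for this dataset may well differ.

from scapy.all import sniff, wrpcap
from scapy.layers.dot11 import Dot11

def is_management(pkt):
    # In 802.11, frame type 0 denotes management frames
    # (beacons, probe requests/responses, association frames, ...).
    return pkt.haslayer(Dot11) and pkt[Dot11].type == 0

# Capture for one hour (cf. the 9 pm to 10 pm window) on a wireless
# interface already in monitor mode ("wlan0mon" is hypothetical).
frames = sniff(iface="wlan0mon", lfilter=is_management, timeout=3600)
wrpcap("management_frames.pcap", frames)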
Transforming Proprietary to High-Level Trigger-Action Programming Rules Dataset
Beat Signer and Ekene Attoh, 2023. DOI: 10.5281/zenodo.10033916
This dataset contains the results of applying Natural Language Processing (NLP) techniques to the May 2017 dataset of Mi et al. (https://www-users.cse.umn.edu/~fengqian/ifttt_measurement/download/201705.tar.gz) using three different approaches.
Research interests: Computer Science, Information Technology, Informatics, Internet of Things (IoT), Dataset
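As a toy illustration of what transforming a proprietary rule into a high-level trigger-action rule can look like, the sketch below lifts an IFTTT-style rule title into a platform-independent trigger-action pair. The input format, the field names and the naive if/then split are hypothetical stand-ins for the dataset's actual NLP approaches.

from dataclasses import dataclass

@dataclass
class TriggerActionRule:
    trigger: str  # e.g. "motion is detected"
    action: str   # e.g. "turn on the lights"

def to_high_level(ifttt_rule: dict) -> TriggerActionRule:
    # A real pipeline would apply NLP (part-of-speech tagging,
    # dependency parsing, ...) to the rule text; this sketch only
    # splits on the literal "if ... then ..." pattern.
    title = ifttt_rule["title"].lower()
    condition, _, consequence = title.partition(" then ")
    return TriggerActionRule(trigger=condition.removeprefix("if "),
                             action=consequence)

print(to_high_level({"title": "If motion is detected then turn on the lights"}))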
Sphero Dead Reckoning and CV Tracking Dataset
Beat Signer and Maxim Van de Wynckel, 2025. DOI: 10.34740/kaggle/ds/6760212
A Sphero Mini is a Bluetooth ball that can be controlled by a smartphone (or, in our case, a laptop) by sending it movement instructions consisting of a direction and a speed. For this dataset, we placed a camera on top of a table to create a top-down view of the Sphero moving on the floor. The Sphero was instructed to move in a spiral trajectory from the bottom-right corner to the center of the area. The dataset contains the video recording of the moving Sphero, the input instructions given to the Sphero, and the sensor data retrieved from the Sphero (https://doi.org/10.34740/kaggle/ds/6760212). The dataset was used to evaluate sensor fusion from various sources.
Research interests: Computer Science, Informatics, Sensors and Sensing, Positioning, Dataset, OpenHPS
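Since the input instructions consist of a direction and a speed, a dead-reckoned trajectory can be obtained by integrating those commands over time. A minimal sketch, assuming headings in degrees, speeds in metres per second and a fixed sampling interval dt (none of which are specified above):

import math

def dead_reckon(instructions, dt=0.1, x=0.0, y=0.0):
    """Integrate (heading_deg, speed) commands into a 2D trajectory."""
    trajectory = [(x, y)]
    for heading_deg, speed in instructions:
        heading = math.radians(heading_deg)
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        trajectory.append((x, y))
    return trajectory

# Example: four commands sweeping from 0 to 90 degrees at constant speed.
print(dead_reckon([(0, 1.0), (30, 1.0), (60, 1.0), (90, 1.0)]))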
OpenHPS: Single Floor Fingerprinting and Trajectory Dataset
Beat Signer and Maxim Van de Wynckel, 2021. DOI: 10.5281/zenodo.4744379
This dataset (https://doi.org/10.5281/zenodo.4744379) contains fingerprint information of WLAN access points and BLE beacons with a known position, as well as IMU sensor data. The data was collected on the floor of the Web and Information Systems Engineering (WISE) Lab at the VUB (Pleinlaan 9, 3rd floor), with 110 training reference points and 30 test data points. Each reference point was recorded for 20 seconds in four different orientations. The accompanying README describes in detail how the data was collected and how the dataset is structured.
Research interests: Computer Science, Information Technology, Informatics, Computer Engineering, Hybrid Systems, Indoor Positioning, Indoor Navigation, Indoor Localization, Positioning, Pedestrian Dead-Reckoning, Indoor Positioning System, Dataset, Data Fingerprinting, OpenHPS
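Fingerprinting data of this kind is typically used with k-nearest-neighbour positioning. The following minimal sketch, which assumes a simplified record layout of ((x, y), {BSSID: RSSI}) reference points rather than the dataset's actual schema, estimates a position by averaging the k reference points that are closest in signal space:

import math

def knn_position(online_scan, fingerprints, k=3):
    """Estimate a position from an {bssid: rssi} scan by averaging
    the k reference points with the most similar fingerprints."""
    def distance(reference_scan):
        shared = set(online_scan) & set(reference_scan)
        if not shared:
            return float("inf")  # no access points in common
        return math.sqrt(sum((online_scan[b] - reference_scan[b]) ** 2
                             for b in shared))

    nearest = sorted(fingerprints, key=lambda fp: distance(fp[1]))[:k]
    xs = [pos[0] for pos, _ in nearest]
    ys = [pos[1] for pos, _ in nearest]
    return (sum(xs) / len(xs), sum(ys) / len(ys))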
Teaching Documents

Human-AI Interaction - Next Generation User Interfaces (4018166FNR)
Beat Signer, 2024
This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.
Research interests: Computer Science, Artificial Intelligence, Human Computer Interaction, Informatics, Natural User Interfaces, HCI, User Interfaces
Data Physicalisation - Next Generation User Interfaces (4018166FNR)
Beat Signer, 2024
This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.
Research interests: Computer Science, Human Computer Interaction, Informatics, Computer Engineering, Information Visualization, Interaction Design, Data Physicalisation, Data Physicalization
class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 93028389; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='93028389']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "26e7f343bc970781ca9da239fb230a3f" } } $('.js-work-strip[data-work-id=93028389]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":93028389,"title":"Use Cases and Course Review - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/93028389/Use_Cases_and_Course_Review_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-12-16T06:08:32.455-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":120086990,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120086990/thumbnails/1.jpg","file_name":"lecture_08_useCasesAndCourseReview.pdf","download_url":"https://www.academia.edu/attachments/120086990/download_file","bulk_download_file_name":"Use_Cases_and_Course_Review_Human_Comput.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120086990/lecture_08_useCasesAndCourseReview-libre.pdf?1733697207=\u0026response-content-disposition=attachment%3B+filename%3DUse_Cases_and_Course_Review_Human_Comput.pdf\u0026Expires=1744203893\u0026Signature=V3IcAGixsDC01Z4NanUhw2lbBznlCEIdyYrUu5658pf34EXG0OzRjr8q5DEofBAqifqCCUtHLnRK4wpO2AQJCHdPbbWRMhNBC4-oj9eeKqco-2bJXo~6yTHKVc5-NGwDcBpgOvkFTsD6iTWwt2yhIXBunvVKvvQiwLDc9P6v7Slbe8kfWn~WNDas8jBd3wpGtSU3-18xAa2D3JFwfMVs76kT8tmY1xzUHpDyi-7kRTm59yPyoSWifnqdTetlVlApwes66oJ~Egt-AVytdkMmg5yNNHk0P7ff5bwxbNUGGGAjW-AzCZ-RXIZOQvrR6MpOTPMA22kIKLOZGXwrdNPsew__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Use_Cases_and_Course_Review_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":49,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":120086990,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/120086990/thumbnails/1.jpg","file_name":"lecture_08_useCasesAndCourseReview.pdf","download_url":"https://www.academia.edu/attachments/120086990/download_file","bulk_download_file_name":"Use_Cases_and_Course_Review_Human_Comput.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/120086990/lecture_08_useCasesAndCourseReview-libre.pdf?1733697207=\u0026response-content-disposition=attachment%3B+filename%3DUse_Cases_and_Course_Review_Human_Comput.pdf\u0026Expires=1744203893\u0026Signature=V3IcAGixsDC01Z4NanUhw2lbBznlCEIdyYrUu5658pf34EXG0OzRjr8q5DEofBAqifqCCUtHLnRK4wpO2AQJCHdPbbWRMhNBC4-oj9eeKqco-2bJXo~6yTHKVc5-NGwDcBpgOvkFTsD6iTWwt2yhIXBunvVKvvQiwLDc9P6v7Slbe8kfWn~WNDas8jBd3wpGtSU3-18xAa2D3JFwfMVs76kT8tmY1xzUHpDyi-7kRTm59yPyoSWifnqdTetlVlApwes66oJ~Egt-AVytdkMmg5yNNHk0P7ff5bwxbNUGGGAjW-AzCZ-RXIZOQvrR6MpOTPMA22kIKLOZGXwrdNPsew__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176810,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-93028389-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="91637582"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/91637582/HCI_Research_Methods_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of HCI Research Methods - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/119338110/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/91637582/HCI_Research_Methods_Human_Computer_Interaction_1023841ANR_">HCI Research Methods - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" 
data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="7b86f4d9ecc17621a8294448ec3eba6e" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:119338110,&quot;asset_id&quot;:91637582,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/119338110/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="91637582"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="91637582"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 91637582; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=91637582]").text(description); $(".js-view-count[data-work-id=91637582]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 91637582; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='91637582']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "7b86f4d9ecc17621a8294448ec3eba6e" } } $('.js-work-strip[data-work-id=91637582]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":91637582,"title":"HCI Research Methods - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","ai_title_tag":"HCI Research Methods Overview","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/91637582/HCI_Research_Methods_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-11-26T05:12:46.531-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":119338110,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119338110/thumbnails/1.jpg","file_name":"lecture_07_hciResearchMethods.pdf","download_url":"https://www.academia.edu/attachments/119338110/download_file","bulk_download_file_name":"HCI_Research_Methods_Human_Computer_Inte.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119338110/lecture_07_hciResearchMethods-libre.pdf?1730669447=\u0026response-content-disposition=attachment%3B+filename%3DHCI_Research_Methods_Human_Computer_Inte.pdf\u0026Expires=1744203893\u0026Signature=SQI84~KdwyHwf8r1CC4Y~aMYs4k90ppxgUclWp-hWmrfJBdEEYl8~-DWV2C1HdTwBXAy-xBe5CeBZPDgC6vl07w9OKS2u5t9Vp83GBpletfvDo4KCdRK1jXy58FM8KNgt83iPL33uxG0Ep3Te~h9xVB0YWmKLsrbexBylDaU8yuSUTTk5kCnbSKpkBnq36rhkL8NtEDGOO9JrvsXMXQ5E4BTl2uM3K6wnX7Lbb4aj9Dj0JaZQcULdqJZqvyun8e0S-S2uwnQY0EQnVBuigHpnGdTrEaIFcwXaCd1YfM-yLCRDcQ6qi9pONn0GqKj~Z9fiCLMMRWVm4rqppjymfHqGA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"HCI_Research_Methods_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":35,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":119338110,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119338110/thumbnails/1.jpg","file_name":"lecture_07_hciResearchMethods.pdf","download_url":"https://www.academia.edu/attachments/119338110/download_file","bulk_download_file_name":"HCI_Research_Methods_Human_Computer_Inte.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119338110/lecture_07_hciResearchMethods-libre.pdf?1730669447=\u0026response-content-disposition=attachment%3B+filename%3DHCI_Research_Methods_Human_Computer_Inte.pdf\u0026Expires=1744203893\u0026Signature=SQI84~KdwyHwf8r1CC4Y~aMYs4k90ppxgUclWp-hWmrfJBdEEYl8~-DWV2C1HdTwBXAy-xBe5CeBZPDgC6vl07w9OKS2u5t9Vp83GBpletfvDo4KCdRK1jXy58FM8KNgt83iPL33uxG0Ep3Te~h9xVB0YWmKLsrbexBylDaU8yuSUTTk5kCnbSKpkBnq36rhkL8NtEDGOO9JrvsXMXQ5E4BTl2uM3K6wnX7Lbb4aj9Dj0JaZQcULdqJZqvyun8e0S-S2uwnQY0EQnVBuigHpnGdTrEaIFcwXaCd1YfM-yLCRDcQ6qi9pONn0GqKj~Z9fiCLMMRWVm4rqppjymfHqGA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":2065,"name":"Research Methodology","url":"https://www.academia.edu/Documents/in/Research_Methodology"},{"id":2170,"name":"Ethnography","url":"https://www.academia.edu/Documents/in/Ethnography"},{"id":5187,"name":"Statistical Analysis","url":"https://www.academia.edu/Documents/in/Statistical_Analysis"},{"id":13675,"name":"Grounded Theory","url":"https://www.academia.edu/Documents/in/Grounded_Theory"},{"id":30561,"name":"Experimental Research","url":"https://www.academia.edu/Documents/in/Experimental_Research"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":55794,"name":"Human Computation","url":"https://www.academia.edu/Documents/in/Human_Computation"},{"id":62436,"name":"Interviews","url":"https://www.academia.edu/Documents/in/Interviews"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":128673,"name":"Descriptive Research","url":"https://www.academia.edu/Documents/in/Descriptive_Research"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":235090,"name":"Online Surveys","url":"https://www.academia.edu/Documents/in/Online_Surveys"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176816,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-91637582-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="89964187"><div class="profile--work_thumbnail hidden-xs"><a 
class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/89964187/Evaluation_Methods_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of Evaluation Methods - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/119190767/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/89964187/Evaluation_Methods_Human_Computer_Interaction_1023841ANR_">Evaluation Methods - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b8d9b3c92b97a0da8faaeb131b163ba7" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:119190767,&quot;asset_id&quot;:89964187,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/119190767/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="89964187"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="89964187"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 89964187; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=89964187]").text(description); $(".js-view-count[data-work-id=89964187]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 89964187; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='89964187']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 
})(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b8d9b3c92b97a0da8faaeb131b163ba7" } } $('.js-work-strip[data-work-id=89964187]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":89964187,"title":"Evaluation Methods - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","ai_title_tag":"Evaluation Methods in Human-Computer Interaction","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/89964187/Evaluation_Methods_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-11-04T07:35:19.119-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":119190767,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119190767/thumbnails/1.jpg","file_name":"lecture_06_evaluationMethods.pdf","download_url":"https://www.academia.edu/attachments/119190767/download_file","bulk_download_file_name":"Evaluation_Methods_Human_Computer_Intera.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119190767/lecture_06_evaluationMethods-libre.pdf?1730056020=\u0026response-content-disposition=attachment%3B+filename%3DEvaluation_Methods_Human_Computer_Intera.pdf\u0026Expires=1744203893\u0026Signature=IUt7m3gNuzB5yrF~ISxpZ4oB-AdPOnDSpd9icrKivvKoqSvLIApvs8euNMAypiYl82t65~HRSUbj7EVYTI6PGG0CzM~S-0KM439d1DXNtsQzmIpzdBK2ZUWLEjs3zpfsm~akz4cAG2xmyMOf3hbKVtXULVpgjh~3KH8XKSITT9fdj4chK-08edszoQlOlcN-7ZlG8-ZzkFWcV~GpIOBnmMH0QPt-axXVNqkWhcfcCGuU6kuGD2c7u8x7o9RnOJ613LMQxLiNJCmfjaQBUSarMbSPVz4va9LAB-AiO~jKTwqfhNo6wgfqttcNREcKp-ZcuvzVZ3WzAShwwF1rRJzd~A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Evaluation_Methods_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":43,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":119190767,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119190767/thumbnails/1.jpg","file_name":"lecture_06_evaluationMethods.pdf","download_url":"https://www.academia.edu/attachments/119190767/download_file","bulk_download_file_name":"Evaluation_Methods_Human_Computer_Intera.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119190767/lecture_06_evaluationMethods-libre.pdf?1730056020=\u0026response-content-disposition=attachment%3B+filename%3DEvaluation_Methods_Human_Computer_Intera.pdf\u0026Expires=1744203893\u0026Signature=IUt7m3gNuzB5yrF~ISxpZ4oB-AdPOnDSpd9icrKivvKoqSvLIApvs8euNMAypiYl82t65~HRSUbj7EVYTI6PGG0CzM~S-0KM439d1DXNtsQzmIpzdBK2ZUWLEjs3zpfsm~akz4cAG2xmyMOf3hbKVtXULVpgjh~3KH8XKSITT9fdj4chK-08edszoQlOlcN-7ZlG8-ZzkFWcV~GpIOBnmMH0QPt-axXVNqkWhcfcCGuU6kuGD2c7u8x7o9RnOJ613LMQxLiNJCmfjaQBUSarMbSPVz4va9LAB-AiO~jKTwqfhNo6wgfqttcNREcKp-ZcuvzVZ3WzAShwwF1rRJzd~A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1615,"name":"Usability","url":"https://www.academia.edu/Documents/in/Usability"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":8910,"name":"Evaluation","url":"https://www.academia.edu/Documents/in/Evaluation"},{"id":14304,"name":"Usability and user experience","url":"https://www.academia.edu/Documents/in/Usability_and_user_experience"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":239810,"name":"Field Study","url":"https://www.academia.edu/Documents/in/Field_Study"},{"id":492279,"name":"Heuristic Evaluation","url":"https://www.academia.edu/Documents/in/Heuristic_Evaluation"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176817,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-89964187-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="89433542"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/89433542/Design_Guidelines_and_Models_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of Design Guidelines and Models - Human-Computer Interaction (1023841ANR)" 
class="work-thumbnail" src="https://attachments.academia-assets.com/119033413/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/89433542/Design_Guidelines_and_Models_Human_Computer_Interaction_1023841ANR_">Design Guidelines and Models - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ba2baba96bc70ea964f62689767e3381" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:119033413,&quot;asset_id&quot;:89433542,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/119033413/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="89433542"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="89433542"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 89433542; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=89433542]").text(description); $(".js-view-count[data-work-id=89433542]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 89433542; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='89433542']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = 
window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ba2baba96bc70ea964f62689767e3381" } } $('.js-work-strip[data-work-id=89433542]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":89433542,"title":"Design Guidelines and Models - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/89433542/Design_Guidelines_and_Models_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-10-28T21:05:08.741-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":119033413,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119033413/thumbnails/1.jpg","file_name":"lecture_05_designGuidelinesAndModels.pdf","download_url":"https://www.academia.edu/attachments/119033413/download_file","bulk_download_file_name":"Design_Guidelines_and_Models_Human_Compu.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119033413/lecture_05_designGuidelinesAndModels-libre.pdf?1729464071=\u0026response-content-disposition=attachment%3B+filename%3DDesign_Guidelines_and_Models_Human_Compu.pdf\u0026Expires=1744203893\u0026Signature=cU70djikxmvoRZATsGjAHvdZw9eLzXQleUfAFrZkPLlQalRN2fOhBi2denB1NHgtcbbsxu8HUaVWi26IJnlKT6ynaLcNqRNjcsuvxhHc~BfqVMTSc7MVf1WVk4q6zV2KyBNmzdtvO3zb-X7cnMnoSmiTefJ4vxJfABg6ZFuoAB~pUP-OE2ol6LZy8jNwNnlpUU5E8Dol8EHq6zn8pVucUQmVj~N3PS2AXXlsEg15WJwA8VXe18twAyCLMykpHQwnbbi2rVpOcRJBR5xQuOcKOohaQz9nbZKpdYFzmko0tdN8b5kORnSe5r07O2mD2ib6HUylsyBWJiowI9hjcFnVHQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Design_Guidelines_and_Models_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":45,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":119033413,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119033413/thumbnails/1.jpg","file_name":"lecture_05_designGuidelinesAndModels.pdf","download_url":"https://www.academia.edu/attachments/119033413/download_file","bulk_download_file_name":"Design_Guidelines_and_Models_Human_Compu.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119033413/lecture_05_designGuidelinesAndModels-libre.pdf?1729464071=\u0026response-content-disposition=attachment%3B+filename%3DDesign_Guidelines_and_Models_Human_Compu.pdf\u0026Expires=1744203893\u0026Signature=cU70djikxmvoRZATsGjAHvdZw9eLzXQleUfAFrZkPLlQalRN2fOhBi2denB1NHgtcbbsxu8HUaVWi26IJnlKT6ynaLcNqRNjcsuvxhHc~BfqVMTSc7MVf1WVk4q6zV2KyBNmzdtvO3zb-X7cnMnoSmiTefJ4vxJfABg6ZFuoAB~pUP-OE2ol6LZy8jNwNnlpUU5E8Dol8EHq6zn8pVucUQmVj~N3PS2AXXlsEg15WJwA8VXe18twAyCLMykpHQwnbbi2rVpOcRJBR5xQuOcKOohaQz9nbZKpdYFzmko0tdN8b5kORnSe5r07O2mD2ib6HUylsyBWJiowI9hjcFnVHQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":2537,"name":"Heuristics","url":"https://www.academia.edu/Documents/in/Heuristics"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":200981,"name":"Design Guidelines","url":"https://www.academia.edu/Documents/in/Design_Guidelines"},{"id":492279,"name":"Heuristic Evaluation","url":"https://www.academia.edu/Documents/in/Heuristic_Evaluation"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176818,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-89433542-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="88991044"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/88991044/Human_Perception_and_Cognition_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of Human Perception and Cognition - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/118869435/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a 
class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/88991044/Human_Perception_and_Cognition_Human_Computer_Interaction_1023841ANR_">Human Perception and Cognition - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="d2f18134b7a89d94cf8b8ff8146d3000" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:118869435,&quot;asset_id&quot;:88991044,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/118869435/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="88991044"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="88991044"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 88991044; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=88991044]").text(description); $(".js-view-count[data-work-id=88991044]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 88991044; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='88991044']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "d2f18134b7a89d94cf8b8ff8146d3000" } } 
$('.js-work-strip[data-work-id=88991044]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":88991044,"title":"Human Perception and Cognition - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/88991044/Human_Perception_and_Cognition_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-10-22T11:34:33.725-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":118869435,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869435/thumbnails/1.jpg","file_name":"lecture_04_humanPerceptionAndCognition.pdf","download_url":"https://www.academia.edu/attachments/118869435/download_file","bulk_download_file_name":"Human_Perception_and_Cognition_Human_Com.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869435/lecture_04_humanPerceptionAndCognition-libre.pdf?1728851510=\u0026response-content-disposition=attachment%3B+filename%3DHuman_Perception_and_Cognition_Human_Com.pdf\u0026Expires=1744203893\u0026Signature=BIxpeOiRVG5Ja6OwtVKTDE18Bup73Z0Cr6OyFqx~7Q8OJ28owSlNsFrCufaoTB0yTp0A3l5c-FhKfi4AbmPShvb7uWfFZkW5MMwUZYeeMLTJDY~fbamWDTWiJxulhIqBT5QHfF4HtAwP-v2Xen5fTjEhAxld786-XgmWjwfOHZ9fgv0Sl3iVJKh7xcRsC1YTaflmhlFZnbRXPSdHesplc1hiEO-InixMQyqn6j6Q38NebaaWvR31gwFkK~6tVFV-nGuuj4DFjFWVQyREFsIW5Zn5w~NGpi7EeQMxgge75jr0ZmDuFQXFj9YDHEOENyD7s6pZW5Hrj1Ij5MhfTroEWg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Human_Perception_and_Cognition_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":37,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":118869435,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869435/thumbnails/1.jpg","file_name":"lecture_04_humanPerceptionAndCognition.pdf","download_url":"https://www.academia.edu/attachments/118869435/download_file","bulk_download_file_name":"Human_Perception_and_Cognition_Human_Com.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869435/lecture_04_humanPerceptionAndCognition-libre.pdf?1728851510=\u0026response-content-disposition=attachment%3B+filename%3DHuman_Perception_and_Cognition_Human_Com.pdf\u0026Expires=1744203893\u0026Signature=BIxpeOiRVG5Ja6OwtVKTDE18Bup73Z0Cr6OyFqx~7Q8OJ28owSlNsFrCufaoTB0yTp0A3l5c-FhKfi4AbmPShvb7uWfFZkW5MMwUZYeeMLTJDY~fbamWDTWiJxulhIqBT5QHfF4HtAwP-v2Xen5fTjEhAxld786-XgmWjwfOHZ9fgv0Sl3iVJKh7xcRsC1YTaflmhlFZnbRXPSdHesplc1hiEO-InixMQyqn6j6Q38NebaaWvR31gwFkK~6tVFV-nGuuj4DFjFWVQyREFsIW5Zn5w~NGpi7EeQMxgge75jr0ZmDuFQXFj9YDHEOENyD7s6pZW5Hrj1Ij5MhfTroEWg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":867,"name":"Perception","url":"https://www.academia.edu/Documents/in/Perception"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":4212,"name":"Cognition","url":"https://www.academia.edu/Documents/in/Cognition"},{"id":8538,"name":"Working Memory","url":"https://www.academia.edu/Documents/in/Working_Memory"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":522465,"name":"Long Term Memory","url":"https://www.academia.edu/Documents/in/Long_Term_Memory"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176820,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-88991044-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="88449005"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/88449005/Requirements_Analysis_and_Prototyping_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of Requirements Analysis and Prototyping - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/118869420/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a 
class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/88449005/Requirements_Analysis_and_Prototyping_Human_Computer_Interaction_1023841ANR_">Requirements Analysis and Prototyping - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="e1da685af05acf8924443a19649eaa28" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:118869420,&quot;asset_id&quot;:88449005,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/118869420/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="88449005"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="88449005"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 88449005; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=88449005]").text(description); $(".js-view-count[data-work-id=88449005]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 88449005; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='88449005']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "e1da685af05acf8924443a19649eaa28" } } 
$('.js-work-strip[data-work-id=88449005]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":88449005,"title":"Requirements Analysis and Prototyping - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","ai_title_tag":"Prototyping in Human-Computer Interaction","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/88449005/Requirements_Analysis_and_Prototyping_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-10-14T00:36:19.711-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":118869420,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869420/thumbnails/1.jpg","file_name":"lecture_03_requirementsAnalysisAndPrototyping.pdf","download_url":"https://www.academia.edu/attachments/118869420/download_file","bulk_download_file_name":"Requirements_Analysis_and_Prototyping_Hu.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869420/lecture_03_requirementsAnalysisAndPrototyping-libre.pdf?1728851521=\u0026response-content-disposition=attachment%3B+filename%3DRequirements_Analysis_and_Prototyping_Hu.pdf\u0026Expires=1744203893\u0026Signature=SQJeuVmYOAMxex6GdU2gAbVsVa0oduwJ~AyW71W53D1e5AyuH2BKw0mscM1RbiP~7aVgVI4YVXloAJXGR2EoewRHzDMBoaphk-4MyT4i7jljtORGkFZQJ21gXipp8knLs4NJ7AWFYI4EHaHhOL3E12dngw0Df8zkJV7HGkt-A1wwnVpdSphplbmrvAfwwxkO-mLBi1K-T9jVpXB1u-Ib8D5-OfWgZwrBgIlkGGYLoy6q2iO5-ccV6Had30cttihm5tT5NZsdKrpyBnm-ax3Iqcb1KO~iOYtTGZQlpBwm6mHBQD3ZqBF7mU2f0lkAWqPlwWEtRQNNc4kvAKqzcJ8KvQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Requirements_Analysis_and_Prototyping_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":44,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":118869420,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869420/thumbnails/1.jpg","file_name":"lecture_03_requirementsAnalysisAndPrototyping.pdf","download_url":"https://www.academia.edu/attachments/118869420/download_file","bulk_download_file_name":"Requirements_Analysis_and_Prototyping_Hu.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869420/lecture_03_requirementsAnalysisAndPrototyping-libre.pdf?1728851521=\u0026response-content-disposition=attachment%3B+filename%3DRequirements_Analysis_and_Prototyping_Hu.pdf\u0026Expires=1744203893\u0026Signature=SQJeuVmYOAMxex6GdU2gAbVsVa0oduwJ~AyW71W53D1e5AyuH2BKw0mscM1RbiP~7aVgVI4YVXloAJXGR2EoewRHzDMBoaphk-4MyT4i7jljtORGkFZQJ21gXipp8knLs4NJ7AWFYI4EHaHhOL3E12dngw0Df8zkJV7HGkt-A1wwnVpdSphplbmrvAfwwxkO-mLBi1K-T9jVpXB1u-Ib8D5-OfWgZwrBgIlkGGYLoy6q2iO5-ccV6Had30cttihm5tT5NZsdKrpyBnm-ax3Iqcb1KO~iOYtTGZQlpBwm6mHBQD3ZqBF7mU2f0lkAWqPlwWEtRQNNc4kvAKqzcJ8KvQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":126585,"name":"Human Computer Interaction Design","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction_Design"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":469749,"name":"Human Computer Interface","url":"https://www.academia.edu/Documents/in/Human_Computer_Interface"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"}],"urls":[{"id":34176821,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-88449005-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="88040096"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/88040096/HCI_and_Interaction_Design_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of HCI and Interaction Design - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/118869404/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" 
data-click-track="profile-work-strip-title" href="https://www.academia.edu/88040096/HCI_and_Interaction_Design_Human_Computer_Interaction_1023841ANR_">HCI and Interaction Design - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="e363deeccbbf63044da0dfb32afeaa7d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:118869404,&quot;asset_id&quot;:88040096,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/118869404/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="88040096"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="88040096"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 88040096; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=88040096]").text(description); $(".js-view-count[data-work-id=88040096]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 88040096; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='88040096']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "e363deeccbbf63044da0dfb32afeaa7d" } } $('.js-work-strip[data-work-id=88040096]').each(function() { if (!$(this).data('initialized')) { new 
WowProfile.WorkStripView({ el: this, workJSON: {"id":88040096,"title":"HCI and Interaction Design - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","more_info":"Vrije Universiteit Brussel, 2024","ai_title_tag":"HCI and Interaction Design at Vrije Universiteit Brussel","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/88040096/HCI_and_Interaction_Design_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-10-06T22:19:38.120-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":118869404,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869404/thumbnails/1.jpg","file_name":"lecture_02_hciAndInteractionDesign.pdf","download_url":"https://www.academia.edu/attachments/118869404/download_file","bulk_download_file_name":"HCI_and_Interaction_Design_Human_Compute.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869404/lecture_02_hciAndInteractionDesign-libre.pdf?1728851534=\u0026response-content-disposition=attachment%3B+filename%3DHCI_and_Interaction_Design_Human_Compute.pdf\u0026Expires=1744203893\u0026Signature=HFmGHIfFmZEHLuPlXItFMGNbpCnO4roThIfmzkYTl2d3OP~JkU6qj4guhx0Mvv-LqfqluZpkGwv3g~LKQJUyQVuMpwSBn-lof3c4e6HWntskRgUr7fI7l~XAhOLtCJBH-FFaGbftILTw5cicV3gmzZjmEW6inBLho--HF5PXrU3yBzUdFgOp4~GWjpI5a7ejQ8PMrwYhKY3er7MgrylVfUu-mwLkHRaGN21sOkLvkw0IFHZY4-tNefBEYqsYB~3-1l0AuczfLkWD-WfJINk7siBXVrMSNGIj~~9dW-jXfKDUa1KZa0nMKqJI3znJj-NoeKRIVag4xJZEX1rQ-RRBYQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"HCI_and_Interaction_Design_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":53,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":118869404,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118869404/thumbnails/1.jpg","file_name":"lecture_02_hciAndInteractionDesign.pdf","download_url":"https://www.academia.edu/attachments/118869404/download_file","bulk_download_file_name":"HCI_and_Interaction_Design_Human_Compute.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118869404/lecture_02_hciAndInteractionDesign-libre.pdf?1728851534=\u0026response-content-disposition=attachment%3B+filename%3DHCI_and_Interaction_Design_Human_Compute.pdf\u0026Expires=1744203893\u0026Signature=HFmGHIfFmZEHLuPlXItFMGNbpCnO4roThIfmzkYTl2d3OP~JkU6qj4guhx0Mvv-LqfqluZpkGwv3g~LKQJUyQVuMpwSBn-lof3c4e6HWntskRgUr7fI7l~XAhOLtCJBH-FFaGbftILTw5cicV3gmzZjmEW6inBLho--HF5PXrU3yBzUdFgOp4~GWjpI5a7ejQ8PMrwYhKY3er7MgrylVfUu-mwLkHRaGN21sOkLvkw0IFHZY4-tNefBEYqsYB~3-1l0AuczfLkWD-WfJINk7siBXVrMSNGIj~~9dW-jXfKDUa1KZa0nMKqJI3znJj-NoeKRIVag4xJZEX1rQ-RRBYQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":126585,"name":"Human Computer Interaction Design","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction_Design"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":469749,"name":"Human Computer Interface","url":"https://www.academia.edu/Documents/in/Human_Computer_Interface"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"}],"urls":[{"id":34176825,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-88040096-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="87616775"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/87616775/Introduction_Human_Computer_Interaction_1023841ANR_"><img alt="Research paper thumbnail of Introduction - Human-Computer Interaction (1023841ANR)" class="work-thumbnail" src="https://attachments.academia-assets.com/118369545/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item 
wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/87616775/Introduction_Human_Computer_Interaction_1023841ANR_">Introduction - Human-Computer Interaction (1023841ANR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universitei...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Human-Computer Interaction&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="c5931f1490513d7725ebdb63749d6c19" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:118369545,&quot;asset_id&quot;:87616775,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/118369545/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="87616775"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="87616775"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 87616775; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=87616775]").text(description); $(".js-view-count[data-work-id=87616775]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 87616775; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='87616775']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "c5931f1490513d7725ebdb63749d6c19" } } 
$('.js-work-strip[data-work-id=87616775]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":87616775,"title":"Introduction - Human-Computer Interaction (1023841ANR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","more_info":"Vrije Universiteit Brussel, 2023","ai_title_tag":"Human-Computer Interaction Overview","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/87616775/Introduction_Human_Computer_Interaction_1023841ANR_","translated_internal_url":"","created_at":"2022-09-30T05:09:46.890-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":118369545,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118369545/thumbnails/1.jpg","file_name":"lecture_01_introduction.pdf","download_url":"https://www.academia.edu/attachments/118369545/download_file","bulk_download_file_name":"Introduction_Human_Computer_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118369545/lecture_01_introduction-libre.pdf?1727060346=\u0026response-content-disposition=attachment%3B+filename%3DIntroduction_Human_Computer_Interaction.pdf\u0026Expires=1744203893\u0026Signature=IqRTZPViw02e5PK4XWX9F1pNMs6GSEjx0aOhZvwdkOvvHm0saeNWRrwHdA80mDAIKjhKt2UpaBdT5HQCunwzK1lReHuudRRz6PDfjnimSuGOjy57xo9pffTEhjlBgrf8aDrBLFi3Whxxaxz0oMteqGhPg15W1otEEIKdALRXd6zK8UbueoHZqsYk~KbSwSlQBNn-fQcbRSUMLOSnB~eNzYPySlYyTIzNg4DT~hRpDQd0iGnVGmUVOuII0nD1hWK5ImBJC41y22spujaRmspWbs7nASqZmgbbqJEcMY35iil7-HTqES705us3S47G0DXi95DwAesiBk5pgZ8TQwc5Ag__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Introduction_Human_Computer_Interaction_1023841ANR_","translated_slug":"","page_count":56,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Human-Computer Interaction' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":118369545,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/118369545/thumbnails/1.jpg","file_name":"lecture_01_introduction.pdf","download_url":"https://www.academia.edu/attachments/118369545/download_file","bulk_download_file_name":"Introduction_Human_Computer_Interaction.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/118369545/lecture_01_introduction-libre.pdf?1727060346=\u0026response-content-disposition=attachment%3B+filename%3DIntroduction_Human_Computer_Interaction.pdf\u0026Expires=1744203893\u0026Signature=IqRTZPViw02e5PK4XWX9F1pNMs6GSEjx0aOhZvwdkOvvHm0saeNWRrwHdA80mDAIKjhKt2UpaBdT5HQCunwzK1lReHuudRRz6PDfjnimSuGOjy57xo9pffTEhjlBgrf8aDrBLFi3Whxxaxz0oMteqGhPg15W1otEEIKdALRXd6zK8UbueoHZqsYk~KbSwSlQBNn-fQcbRSUMLOSnB~eNzYPySlYyTIzNg4DT~hRpDQd0iGnVGmUVOuII0nD1hWK5ImBJC41y22spujaRmspWbs7nASqZmgbbqJEcMY35iil7-HTqES705us3S47G0DXi95DwAesiBk5pgZ8TQwc5Ag__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":4624,"name":"Brain-computer interfaces","url":"https://www.academia.edu/Documents/in/Brain-computer_interfaces"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":36689,"name":"Human-Computer Interface","url":"https://www.academia.edu/Documents/in/Human-Computer_Interface"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":39572,"name":"History of Computing (Computer Science)","url":"https://www.academia.edu/Documents/in/History_of_Computing_Computer_Science_"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":126585,"name":"Human Computer Interaction Design","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction_Design"},{"id":149738,"name":"Man-Machine Interaction","url":"https://www.academia.edu/Documents/in/Man-Machine_Interaction"},{"id":167328,"name":"Gestural Interfaces","url":"https://www.academia.edu/Documents/in/Gestural_Interfaces"},{"id":176632,"name":"Interfaces","url":"https://www.academia.edu/Documents/in/Interfaces"},{"id":197254,"name":"Graphical User Interfaces","url":"https://www.academia.edu/Documents/in/Graphical_User_Interfaces"},{"id":851044,"name":"Computer Human Interaction","url":"https://www.academia.edu/Documents/in/Computer_Human_Interaction"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176827,"url":"https://wise.vub.ac.be/course/human-computer-interaction"}]}, 
dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-87616775-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="79092772"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/79092772/Data_Management_and_Analytics_DAMA_Specilisation"><img alt="Research paper thumbnail of Data Management and Analytics (DAMA) Specilisation" class="work-thumbnail" src="https://attachments.academia-assets.com/112816083/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/79092772/Data_Management_and_Analytics_DAMA_Specilisation">Data Management and Analytics (DAMA) Specilisation</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Our goal is to prepare students for the future challenges in managing and analysing the rapidly g...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Our goal is to prepare students for the future challenges in managing and analysing the rapidly growing amounts of data that is produced manually by humans as well as automatically generated by, for example, sensors in emerging Internet of Things solutions, data capturing on the Web or as an outcome of scientific experiments. 
Thereby, we focus on the scientific aspects and concepts for scalable data management solutions, information retrieval and data mining as well as different information visualisation and interaction techniques rather than on existing mainstream technologies, and provide students the necessary education for a future career as data scientists and data engineers.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ff451cba7856046b736d46ca3970def8" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:112816083,&quot;asset_id&quot;:79092772,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/112816083/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="79092772"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="79092772"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 79092772; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=79092772]").text(description); $(".js-view-count[data-work-id=79092772]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 79092772; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='79092772']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ff451cba7856046b736d46ca3970def8" } } $('.js-work-strip[data-work-id=79092772]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":79092772,"title":"Data Management and Analytics (DAMA) Specilisation","translated_title":"","metadata":{"abstract":"Our goal is to prepare students for the future challenges in managing and analysing the rapidly growing amounts of data that is produced manually by humans as well as automatically generated by, for example, sensors in emerging Internet of Things solutions, data capturing on the Web 
or as an outcome of scientific experiments. Thereby, we focus on the scientific aspects and concepts for scalable data management solutions, information retrieval and data mining as well as different information visualisation and interaction techniques rather than on existing mainstream technologies, and provide students the necessary education for a future career as data scientists and data engineers.","publication_date":{"day":null,"month":null,"year":2024,"errors":{}}},"translated_abstract":"Our goal is to prepare students for the future challenges in managing and analysing the rapidly growing amounts of data that is produced manually by humans as well as automatically generated by, for example, sensors in emerging Internet of Things solutions, data capturing on the Web or as an outcome of scientific experiments. Thereby, we focus on the scientific aspects and concepts for scalable data management solutions, information retrieval and data mining as well as different information visualisation and interaction techniques rather than on existing mainstream technologies, and provide students the necessary education for a future career as data scientists and data engineers.","internal_url":"https://www.academia.edu/79092772/Data_Management_and_Analytics_DAMA_Specilisation","translated_internal_url":"","created_at":"2022-05-14T03:39:17.493-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":112816083,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/112816083/thumbnails/1.jpg","file_name":"DAMA.pdf","download_url":"https://www.academia.edu/attachments/112816083/download_file","bulk_download_file_name":"Data_Management_and_Analytics_DAMA_Speci.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/112816083/DAMA-libre.pdf?1711600023=\u0026response-content-disposition=attachment%3B+filename%3DData_Management_and_Analytics_DAMA_Speci.pdf\u0026Expires=1744203893\u0026Signature=BfTwVqFrp4-CNHiDm026g-Kpn3NZjMRt2lxXhrnw9nJqWZDQ0BFBxH-e9QLMmi5Qm0H7qdSsgEFCht0M8g0HgrIGlQ3YG5-eJ8sZLsT~EQ6oE8wjij9juQnCIyBgQb9TQ8Ih-sLZwoK05Dcw~i6IMi7ftixm8Mx1KetYtG~PDSxDl6vTe-cCZY-A6ioyLH9MyByX4I0Wn6EAsrrYc3nLR31bOH3OIMveYAgk7JuDRLE0DOPGtUL2wbA1JlSd9hHmCemnlL09R83vcKUheCcbktIlRKLeYSygwnLA3~52IPzgbYgWbP9joHYsINoxrT8lxhcz-lR8xCiJE-PkFwwlWQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Data_Management_and_Analytics_DAMA_Specilisation","translated_slug":"","page_count":11,"language":"en","content_type":"Work","summary":"Our goal is to prepare students for the future challenges in managing and analysing the rapidly growing amounts of data that is produced manually by humans as well as automatically generated by, for example, sensors in emerging Internet of Things solutions, data capturing on the Web or as an outcome of scientific experiments. 
Thereby, we focus on the scientific aspects and concepts for scalable data management solutions, information retrieval and data mining as well as different information visualisation and interaction techniques rather than on existing mainstream technologies, and provide students the necessary education for a future career as data scientists and data engineers.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":112816083,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/112816083/thumbnails/1.jpg","file_name":"DAMA.pdf","download_url":"https://www.academia.edu/attachments/112816083/download_file","bulk_download_file_name":"Data_Management_and_Analytics_DAMA_Speci.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/112816083/DAMA-libre.pdf?1711600023=\u0026response-content-disposition=attachment%3B+filename%3DData_Management_and_Analytics_DAMA_Speci.pdf\u0026Expires=1744203893\u0026Signature=BfTwVqFrp4-CNHiDm026g-Kpn3NZjMRt2lxXhrnw9nJqWZDQ0BFBxH-e9QLMmi5Qm0H7qdSsgEFCht0M8g0HgrIGlQ3YG5-eJ8sZLsT~EQ6oE8wjij9juQnCIyBgQb9TQ8Ih-sLZwoK05Dcw~i6IMi7ftixm8Mx1KetYtG~PDSxDl6vTe-cCZY-A6ioyLH9MyByX4I0Wn6EAsrrYc3nLR31bOH3OIMveYAgk7JuDRLE0DOPGtUL2wbA1JlSd9hHmCemnlL09R83vcKUheCcbktIlRKLeYSygwnLA3~52IPzgbYgWbP9joHYsINoxrT8lxhcz-lR8xCiJE-PkFwwlWQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":2009,"name":"Data Mining","url":"https://www.academia.edu/Documents/in/Data_Mining"},{"id":2482,"name":"Database Systems","url":"https://www.academia.edu/Documents/in/Database_Systems"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":4205,"name":"Data Analysis","url":"https://www.academia.edu/Documents/in/Data_Analysis"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":27360,"name":"Databases","url":"https://www.academia.edu/Documents/in/Databases"},{"id":45090,"name":"Database Management Systems","url":"https://www.academia.edu/Documents/in/Database_Management_Systems"},{"id":47980,"name":"Data Visualization","url":"https://www.academia.edu/Documents/in/Data_Visualization"},{"id":69100,"name":"Data Science","url":"https://www.academia.edu/Documents/in/Data_Science"},{"id":81219,"name":"Data Analytics","url":"https://www.academia.edu/Documents/in/Data_Analytics"},{"id":295037,"name":"Information Techology","url":"https://www.academia.edu/Documents/in/Information_Techology"},{"id":413148,"name":"Big Data / Analytics / Data Mining","url":"https://www.academia.edu/Documents/in/Big_Data_Analytics_Data_Mining"}],"urls":[{"id":34176693,"url":"https://wise.vub.ac.be/data-management-and-analytics-specialisation-dama"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", 
"profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-79092772-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="16063231"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/16063231/Introduction_Next_Generation_User_Interfaces_4018166FNR_"><img alt="Research paper thumbnail of Introduction - Next Generation User Interfaces (4018166FNR)" class="work-thumbnail" src="https://attachments.academia-assets.com/121253550/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/16063231/Introduction_Next_Generation_User_Interfaces_4018166FNR_">Introduction - Next Generation User Interfaces (4018166FNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Univer...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-16063231-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-16063231-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169881/figure-1-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169891/figure-2-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169899/figure-3-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169910/figure-4-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169918/figure-5-eat-signer-department-of-computer-science-bsigner"><img alt="eat Signer - Department of Computer Science - bsigner@vub.ac.| " class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a 
href="https://www.academia.edu/figures/4169939/figure-6-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169954/figure-7-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169967/figure-8-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169977/figure-9-developer-watching-videotape-of-usability-test"><img alt="Developer watching videotape of usability test. " class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_009.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4169989/figure-10-aware-home-georgia-tech"><img alt="Aware Home, Georgia Tech " class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_010.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4170000/figure-11-currently-the-following-scales-are-available-this"><img alt="Currently the following scales are available. This list may grow in the future. The scales are not independent! Please check the handbook before you create your first questionnaire with the UEQ- Beat Signer - Department of Computer Science - bsigner@vub.ac.be Available Scales " class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_011.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4170012/figure-12-introduction-next-generation-user-interfaces-fnr"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/121253550/figure_012.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/4170022/table-1-seat-signer-department-of-computer-science-bsigner"><img alt="seat Signer - Department of Computer Science - bsigner@vub.ac.be " class="figure-slide-image" src="https://figures.academia-assets.com/121253550/table_001.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-16063231-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="458bba1c57c9743802c3b44e574883df" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:121253550,&quot;asset_id&quot;:16063231,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/121253550/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" 
data-work-id="16063231"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="16063231"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 16063231; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=16063231]").text(description); $(".js-view-count[data-work-id=16063231]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 16063231; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='16063231']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "458bba1c57c9743802c3b44e574883df" } } $('.js-work-strip[data-work-id=16063231]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":16063231,"title":"Introduction - Next Generation User Interfaces (4018166FNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","more_info":"Vrije Universiteit Brussel, 2025"},"translated_abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. 
","internal_url":"https://www.academia.edu/16063231/Introduction_Next_Generation_User_Interfaces_4018166FNR_","translated_internal_url":"","created_at":"2015-09-22T22:57:50.937-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":121253550,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121253550/thumbnails/1.jpg","file_name":"lecture_01_introduction.pdf","download_url":"https://www.academia.edu/attachments/121253550/download_file","bulk_download_file_name":"Introduction_Next_Generation_User_Interf.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121253550/lecture_01_introduction-libre.pdf?1739041593=\u0026response-content-disposition=attachment%3B+filename%3DIntroduction_Next_Generation_User_Interf.pdf\u0026Expires=1744203893\u0026Signature=dKVMOt53qggxVR6A-ssKEpT07l4LEQF7Xs86NIJstCrPEudmzwRPzUUsUInt9LCOf07egLNCBQwwGsSLZjcsFhMndp2xjRap9lX2w0G9KKU3eugfPelV0xD5QJNINWM7fujDe-MOfDwR9Ms0LfsOgnAcwbRLvABbIJiRnKfcPahWZS~BNZJmYBoSWTTWHQrlmg3U1pJ~mHHAc~17Acac1vmkpiio2DH8sTF-0OiJWgvTu4aiWCD42On5rAuBPBLxzXXAIITTnYPragzCUzDEJFEx5hkeIGrAVMbSdIOgi7FILKgSp8VkJtLGX4mC9bp0TkXJj6QALAB~K25y3imTaA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Introduction_Next_Generation_User_Interfaces_4018166FNR_","translated_slug":"","page_count":48,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":121253550,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121253550/thumbnails/1.jpg","file_name":"lecture_01_introduction.pdf","download_url":"https://www.academia.edu/attachments/121253550/download_file","bulk_download_file_name":"Introduction_Next_Generation_User_Interf.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121253550/lecture_01_introduction-libre.pdf?1739041593=\u0026response-content-disposition=attachment%3B+filename%3DIntroduction_Next_Generation_User_Interf.pdf\u0026Expires=1744203893\u0026Signature=dKVMOt53qggxVR6A-ssKEpT07l4LEQF7Xs86NIJstCrPEudmzwRPzUUsUInt9LCOf07egLNCBQwwGsSLZjcsFhMndp2xjRap9lX2w0G9KKU3eugfPelV0xD5QJNINWM7fujDe-MOfDwR9Ms0LfsOgnAcwbRLvABbIJiRnKfcPahWZS~BNZJmYBoSWTTWHQrlmg3U1pJ~mHHAc~17Acac1vmkpiio2DH8sTF-0OiJWgvTu4aiWCD42On5rAuBPBLxzXXAIITTnYPragzCUzDEJFEx5hkeIGrAVMbSdIOgi7FILKgSp8VkJtLGX4mC9bp0TkXJj6QALAB~K25y3imTaA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information 
Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":988,"name":"Design","url":"https://www.academia.edu/Documents/in/Design"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":4198,"name":"Mobile Technology","url":"https://www.academia.edu/Documents/in/Mobile_Technology"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10472,"name":"Web Applications","url":"https://www.academia.edu/Documents/in/Web_Applications"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":67792,"name":"Multi-Touch","url":"https://www.academia.edu/Documents/in/Multi-Touch"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[{"id":34176705,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: 
Interaction Design - Next Generation User Interfaces (4018166FNR)
Teaching document, Vrije Universiteit Brussel, 2023; 51 pages (lecture_02_interactionDesign.pdf).
This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.
Research interests: Computer Science; Human Computer Interaction; Computer Engineering; Usability; User Experience (UX); Interaction Design; User Centred Design; Multimodal Interaction; Usability and user experience; User Experience Design; HCI; User eXperience; Mobile HCI; Human-Computer Interface; User-Centered Design (UCD); Usability Testing; Usability Evaluation; Human Computer Interaction Design; User Centered Design; interaction, HCI, cognitive psychology; User-Centered Design; User Experience Research; Computer and Human interactions; User-Centered; Human-Computer Interaction.
Course page: https://wise.vub.ac.be/course/next-generation-user-interfaces

Design","url":"https://www.academia.edu/Documents/in/User-Centered_Design"},{"id":391512,"name":"User Experience Research","url":"https://www.academia.edu/Documents/in/User_Experience_Research"},{"id":878195,"name":"Computer and Human interactions","url":"https://www.academia.edu/Documents/in/Computer_and_Human_interactions"},{"id":1185463,"name":"User-Centered","url":"https://www.academia.edu/Documents/in/User-Centered"},{"id":1251579,"name":"Human-Computer Interaction","url":"https://www.academia.edu/Documents/in/Human-Computer_Interaction"}],"urls":[{"id":34176707,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-28877516-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="29407199"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/29407199/Multimodal_Interaction_Next_Generation_User_Interfaces_4018166FNR_"><img alt="Research paper thumbnail of Multimodal Interaction - Next Generation User Interfaces (4018166FNR)" class="work-thumbnail" src="https://attachments.academia-assets.com/121511541/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/29407199/Multimodal_Interaction_Next_Generation_User_Interfaces_4018166FNR_">Multimodal Interaction - Next Generation User Interfaces (4018166FNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Univer...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="6281cefadad314d857c93e454e05f0a5" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:121511541,&quot;asset_id&quot;:29407199,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/121511541/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="29407199"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="29407199"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 29407199; 
window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=29407199]").text(description); $(".js-view-count[data-work-id=29407199]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 29407199; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='29407199']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "6281cefadad314d857c93e454e05f0a5" } } $('.js-work-strip[data-work-id=29407199]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":29407199,"title":"Multimodal Interaction - Next Generation User Interfaces (4018166FNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","publication_date":{"day":null,"month":null,"year":2025,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. 
","internal_url":"https://www.academia.edu/29407199/Multimodal_Interaction_Next_Generation_User_Interfaces_4018166FNR_","translated_internal_url":"","created_at":"2016-10-24T22:10:41.416-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":121511541,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121511541/thumbnails/1.jpg","file_name":"lecture_03_multimodalInteraction.pdf","download_url":"https://www.academia.edu/attachments/121511541/download_file","bulk_download_file_name":"Multimodal_Interaction_Next_Generation_U.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121511541/lecture_03_multimodalInteraction-libre.pdf?1740373242=\u0026response-content-disposition=attachment%3B+filename%3DMultimodal_Interaction_Next_Generation_U.pdf\u0026Expires=1744203893\u0026Signature=TSRl3DqgNcmi1Sx6lPcPDe~bC8X5aPc06962wrdnE8Cfvm~lky2vnuDcKzYxmEc06hcLT6WB9tEp9lQ1IYs0DcZ5HYcvtqZbkfeiQmDAtXiU5aIAYpu3LRCHgQdMI6USXgPg4yzeW2UfCBAF6OmI5n2sLGybe-BzJ8Le-V8-TTxtqaNPybz-8gfP6bbtf9iJvzMq1k6qM9bNhsOqhzD4G4IOEPj1Fu8hP2Y-Ny50SSlTh5Jnz5-lSNB-IEys1AntJ6ZJCf~K1~bMbvKVdbY10WDaRvarUeiR~sVhlS9AFq06LI4omV~L2IG9KaJcSWoK5jS74b4w4wsQeiaRw12SGg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Multimodal_Interaction_Next_Generation_User_Interfaces_4018166FNR_","translated_slug":"","page_count":38,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":121511541,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121511541/thumbnails/1.jpg","file_name":"lecture_03_multimodalInteraction.pdf","download_url":"https://www.academia.edu/attachments/121511541/download_file","bulk_download_file_name":"Multimodal_Interaction_Next_Generation_U.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121511541/lecture_03_multimodalInteraction-libre.pdf?1740373242=\u0026response-content-disposition=attachment%3B+filename%3DMultimodal_Interaction_Next_Generation_U.pdf\u0026Expires=1744203893\u0026Signature=TSRl3DqgNcmi1Sx6lPcPDe~bC8X5aPc06962wrdnE8Cfvm~lky2vnuDcKzYxmEc06hcLT6WB9tEp9lQ1IYs0DcZ5HYcvtqZbkfeiQmDAtXiU5aIAYpu3LRCHgQdMI6USXgPg4yzeW2UfCBAF6OmI5n2sLGybe-BzJ8Le-V8-TTxtqaNPybz-8gfP6bbtf9iJvzMq1k6qM9bNhsOqhzD4G4IOEPj1Fu8hP2Y-Ny50SSlTh5Jnz5-lSNB-IEys1AntJ6ZJCf~K1~bMbvKVdbY10WDaRvarUeiR~sVhlS9AFq06LI4omV~L2IG9KaJcSWoK5jS74b4w4wsQeiaRw12SGg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information 
Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1736,"name":"Science Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":54405,"name":"Multimodal Communication","url":"https://www.academia.edu/Documents/in/Multimodal_Communication"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[{"id":34176711,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-29407199-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="29706608"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/29706608/Interactive_Tabletops_and_Surfaces_Next_Generation_User_Interfaces_4018166FNR_"><img alt="Research paper thumbnail of Interactive Tabletops and Surfaces - Next Generation User Interfaces (4018166FNR)" class="work-thumbnail" src="https://attachments.academia-assets.com/121736297/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div 
class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/29706608/Interactive_Tabletops_and_Surfaces_Next_Generation_User_Interfaces_4018166FNR_">Interactive Tabletops and Surfaces - Next Generation User Interfaces (4018166FNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Univer...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="c5100004b7bfc95c3a453d1f1857050c" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:121736297,&quot;asset_id&quot;:29706608,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/121736297/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="29706608"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="29706608"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 29706608; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=29706608]").text(description); $(".js-view-count[data-work-id=29706608]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 29706608; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='29706608']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: 
"c5100004b7bfc95c3a453d1f1857050c" } } $('.js-work-strip[data-work-id=29706608]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":29706608,"title":"Interactive Tabletops and Surfaces - Next Generation User Interfaces (4018166FNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.","ai_title_tag":"Next Generation User Interfaces: Interactive Tabletops","publication_date":{"day":null,"month":null,"year":2025,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.","internal_url":"https://www.academia.edu/29706608/Interactive_Tabletops_and_Surfaces_Next_Generation_User_Interfaces_4018166FNR_","translated_internal_url":"","created_at":"2016-11-06T21:20:00.111-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":121736297,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121736297/thumbnails/1.jpg","file_name":"lecture_05_interactivTabletopsAndSurfaces.pdf","download_url":"https://www.academia.edu/attachments/121736297/download_file","bulk_download_file_name":"Interactive_Tabletops_and_Surfaces_Next.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121736297/lecture_05_interactivTabletopsAndSurfaces-libre.pdf?1741563024=\u0026response-content-disposition=attachment%3B+filename%3DInteractive_Tabletops_and_Surfaces_Next.pdf\u0026Expires=1744203893\u0026Signature=aK6HHsPGhVD7e-g0mntyMT~U2NJ-E5Yl9EArbH3cbJChZe6YdvE5qo2igN2cu2-UBn3xncW2ulP7iNhxWGj4VotQFXhL~Z8~zLs7bZgrkd3~bxE~JJzegK~hSSq5jxnh7JTAxVs7cJc44mje-QT0A~YyMQ7lw861WnyRNZfNlaOoaVD7iYheI7Af78zgEKHLnrzUyHhGmVV0vhz~uyMf1aT5b4og1Nv43bpkZIj7d8k6UBrWdp3Pjjr3~w~TeTz-z4eZD1De3RkX8apZ2XoSjlu5rPXqdl1sndmcjrVugUQtBRW0doFt2qOL38zhHYL16w9GXH2mhwGu2Pje5MOemQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Interactive_Tabletops_and_Surfaces_Next_Generation_User_Interfaces_4018166FNR_","translated_slug":"","page_count":50,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat 
Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":121736297,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121736297/thumbnails/1.jpg","file_name":"lecture_05_interactivTabletopsAndSurfaces.pdf","download_url":"https://www.academia.edu/attachments/121736297/download_file","bulk_download_file_name":"Interactive_Tabletops_and_Surfaces_Next.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121736297/lecture_05_interactivTabletopsAndSurfaces-libre.pdf?1741563024=\u0026response-content-disposition=attachment%3B+filename%3DInteractive_Tabletops_and_Surfaces_Next.pdf\u0026Expires=1744203893\u0026Signature=aK6HHsPGhVD7e-g0mntyMT~U2NJ-E5Yl9EArbH3cbJChZe6YdvE5qo2igN2cu2-UBn3xncW2ulP7iNhxWGj4VotQFXhL~Z8~zLs7bZgrkd3~bxE~JJzegK~hSSq5jxnh7JTAxVs7cJc44mje-QT0A~YyMQ7lw861WnyRNZfNlaOoaVD7iYheI7Af78zgEKHLnrzUyHhGmVV0vhz~uyMf1aT5b4og1Nv43bpkZIj7d8k6UBrWdp3Pjjr3~w~TeTz-z4eZD1De3RkX8apZ2XoSjlu5rPXqdl1sndmcjrVugUQtBRW0doFt2qOL38zhHYL16w9GXH2mhwGu2Pje5MOemQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"},{"id":121736294,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121736294/thumbnails/1.jpg","file_name":"lecture_05_interactivTabletopsAndSurfaces.pdf","download_url":"https://www.academia.edu/attachments/121736294/download_file","bulk_download_file_name":"Interactive_Tabletops_and_Surfaces_Next.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121736294/lecture_05_interactivTabletopsAndSurfaces-libre.pdf?1741563027=\u0026response-content-disposition=attachment%3B+filename%3DInteractive_Tabletops_and_Surfaces_Next.pdf\u0026Expires=1744203893\u0026Signature=dUHn9QqRTdIsZffaZmxVqH7YWIlv1OWQaT4fIaX0icitsq7L15LeQh1d-UroHdAuL~A5cRxtMh7liKGDrk716eQyG5dcFBGOyc-KigCkSHuUb4F6ZUEU0IMLpFVhY1giDBfMcrKWKHuBchb~uHOCoyRplx2Ugcs3YpxejHQpzjjSTwCJ-kH6bEEos~miuunlgVzGMpZkPv5kmdpT5ePzx-EOowTHQQYIF9W-Od-Yn57f-H2SjDCNEIt3vfmzerpDEQTd95iRyQ3w-F3QP2x7yVsCDzftBQ9Hn7IIE7IyXzir6PJTnOWmU3anOnyuFE2icSPsYMp61YWDJ8efqFQsmg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":1736,"name":"Science 
Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5401,"name":"Computer-Mediated Communication","url":"https://www.academia.edu/Documents/in/Computer-Mediated_Communication"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":54670,"name":"FTIR","url":"https://www.academia.edu/Documents/in/FTIR"},{"id":58051,"name":"OLED","url":"https://www.academia.edu/Documents/in/OLED"},{"id":67792,"name":"Multi-Touch","url":"https://www.academia.edu/Documents/in/Multi-Touch"},{"id":71901,"name":"Tabletop","url":"https://www.academia.edu/Documents/in/Tabletop"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[{"id":34176717,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { 
Gesture-based Interaction - Next Generation User Interfaces (4018166FNR)
Teaching document, 2025, 56 pages, English.
This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel.
PDF: lecture_06_gestureBasedInteraction.pdf, https://www.academia.edu/attachments/121868903/download_file
Work page: https://www.academia.edu/29826883/Gesture_based_Interaction_Next_Generation_User_Interfaces_4018166FNR_
Course page: https://wise.vub.ac.be/course/next-generation-user-interfaces
Research interests: Information Systems; Engineering; Computer Science; Software Engineering; Artificial Intelligence; Human Computer Interaction; Information Technology; Education; Technology; New Media; Design; Computer Science Education; Informatics; Science Education; Gesture Studies; Higher Education; Digital Media; User Experience (UX); Ubiquitous Computing; Gesture; Multimedia; Learning and Teaching; Interaction Design; Human Information Interaction; Computer-Mediated Communication; Augmented Reality; Pervasive Computing; Interactive and Digital Media; Multimodal Interaction; Cross-Media Information Spaces; Interactive Paper; User Interface; Digital Pen and Paper; Gesture Recognition; Multimodality; Hypermedia; Gestures; Multi-Touch; Crossmedia; User interfaces; Multi-touch tables; Multi Touch; Hand Gesture Recognition System; Vision-based hand gesture recognition; Pervasive and Ubiquitous Computing; Gesture Technology

wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/29977545/Tangible_Embedded_and_Embodied_Interaction_Next_Generation_User_Interfaces_4018166FNR_">Tangible, Embedded and Embodied Interaction - Next Generation User Interfaces (4018166FNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Univer...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="6f232048b4c561ac28d5f19d33a3f085" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:121977503,&quot;asset_id&quot;:29977545,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/121977503/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="29977545"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="29977545"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 29977545; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=29977545]").text(description); $(".js-view-count[data-work-id=29977545]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 29977545; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='29977545']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: 
window.WowProfile.dispatcher, downloadLinkId: "6f232048b4c561ac28d5f19d33a3f085" } } $('.js-work-strip[data-work-id=29977545]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":29977545,"title":"Tangible, Embedded and Embodied Interaction - Next Generation User Interfaces (4018166FNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","ai_title_tag":"Next Generation Interfaces: Tangible and Embedded Interaction"},"translated_abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/29977545/Tangible_Embedded_and_Embodied_Interaction_Next_Generation_User_Interfaces_4018166FNR_","translated_internal_url":"","created_at":"2016-11-20T08:11:12.601-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":121977503,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121977503/thumbnails/1.jpg","file_name":"lecture_07_tangibleInteraction.pdf","download_url":"https://www.academia.edu/attachments/121977503/download_file","bulk_download_file_name":"Tangible_Embedded_and_Embodied_Interacti.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121977503/lecture_07_tangibleInteraction-libre.pdf?1742802555=\u0026response-content-disposition=attachment%3B+filename%3DTangible_Embedded_and_Embodied_Interacti.pdf\u0026Expires=1744203894\u0026Signature=VCGAIc~~glHPTg3JwbRMmKfPtHtG92JgiNSrSQ4NJY-~A350hJQAM4jGDINIYXZNHCLiKCjJwIwuJu7eJsDf1n7Hep06F4XQT5JanM76BHNo~ibfWKeBr7MB0DKVXqzI1CCRCmj6MnBBE3Ut4BNHXBxIPQob-z-D354QzdwHcFbdWBmyQtMpkzimjz0QiCpXYHwV88hcQ3XHzhhtmm~YoQH4p1pilVF14t3aJI5S-x8nmI0TnErL4bXdpCzcLVtUGMUFZ4XhT8TZfMhvw81eWQDnx-q5g7kUzLI1EHpD2J5sRUPLz15oulfXlbINVKsgiN8Q4~CBNHijia8Roc9fIg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Tangible_Embedded_and_Embodied_Interaction_Next_Generation_User_Interfaces_4018166FNR_","translated_slug":"","page_count":45,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":121977503,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/121977503/thumbnails/1.jpg","file_name":"lecture_07_tangibleInteraction.pdf","download_url":"https://www.academia.edu/attachments/121977503/download_file","bulk_download_file_name":"Tangible_Embedded_and_Embodied_Interacti.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/121977503/lecture_07_tangibleInteraction-libre.pdf?1742802555=\u0026response-content-disposition=attachment%3B+filename%3DTangible_Embedded_and_Embodied_Interacti.pdf\u0026Expires=1744203894\u0026Signature=VCGAIc~~glHPTg3JwbRMmKfPtHtG92JgiNSrSQ4NJY-~A350hJQAM4jGDINIYXZNHCLiKCjJwIwuJu7eJsDf1n7Hep06F4XQT5JanM76BHNo~ibfWKeBr7MB0DKVXqzI1CCRCmj6MnBBE3Ut4BNHXBxIPQob-z-D354QzdwHcFbdWBmyQtMpkzimjz0QiCpXYHwV88hcQ3XHzhhtmm~YoQH4p1pilVF14t3aJI5S-x8nmI0TnErL4bXdpCzcLVtUGMUFZ4XhT8TZfMhvw81eWQDnx-q5g7kUzLI1EHpD2J5sRUPLz15oulfXlbINVKsgiN8Q4~CBNHijia8Roc9fIg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":988,"name":"Design","url":"https://www.academia.edu/Documents/in/Design"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1736,"name":"Science Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4416,"name":"Interaction 
Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":10907,"name":"Context-Aware Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":39433,"name":"Ambient Intelligence","url":"https://www.academia.edu/Documents/in/Ambient_Intelligence"},{"id":67792,"name":"Multi-Touch","url":"https://www.academia.edu/Documents/in/Multi-Touch"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":90556,"name":"TEI (Tangible, Embedded, and Embodied Interaction)","url":"https://www.academia.edu/Documents/in/TEI_Tangible_Embedded_and_Embodied_Interaction_"},{"id":173458,"name":"Tangible Computing","url":"https://www.academia.edu/Documents/in/Tangible_Computing"},{"id":182290,"name":"Tangible User Interface, Tangible Programming","url":"https://www.academia.edu/Documents/in/Tangible_User_Interface_Tangible_Programming"},{"id":359271,"name":"Tangible Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_Interfaces"},{"id":416425,"name":"Tangible Media","url":"https://www.academia.edu/Documents/in/Tangible_Media"},{"id":448968,"name":"Tangible Interaction","url":"https://www.academia.edu/Documents/in/Tangible_Interaction"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"},{"id":463287,"name":"Tangibles","url":"https://www.academia.edu/Documents/in/Tangibles"},{"id":711190,"name":"Tangible BIts","url":"https://www.academia.edu/Documents/in/Tangible_BIts"},{"id":876643,"name":"Tangible User interface","url":"https://www.academia.edu/Documents/in/Tangible_User_interface"},{"id":1194477,"name":"Embodied Cognition, Interaction Design, Tangible Interaction","url":"https://www.academia.edu/Documents/in/Embodied_Cognition_Interaction_Design_Tangible_Interaction"}],"urls":[{"id":34176733,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-29977545-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="30158334"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/30158334/Virtual_and_Augmented_Reality_Next_Generation_User_Interfaces_4018166FNR_"><img alt="Research paper thumbnail of Virtual and Augmented Reality - Next Generation User Interfaces (4018166FNR)" 
class="work-thumbnail" src="https://attachments.academia-assets.com/122094948/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/30158334/Virtual_and_Augmented_Reality_Next_Generation_User_Interfaces_4018166FNR_">Virtual and Augmented Reality - Next Generation User Interfaces (4018166FNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Univer...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This lecture forms part of the &#39;Next Generation User Interfaces&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="07872f4007365ddacc767286451f2408" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:122094948,&quot;asset_id&quot;:30158334,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/122094948/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="30158334"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="30158334"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 30158334; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=30158334]").text(description); $(".js-view-count[data-work-id=30158334]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 30158334; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='30158334']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ 
window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "07872f4007365ddacc767286451f2408" } } $('.js-work-strip[data-work-id=30158334]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":30158334,"title":"Virtual and Augmented Reality - Next Generation User Interfaces (4018166FNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","publication_date":{"day":null,"month":null,"year":2025,"errors":{}}},"translated_abstract":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. ","internal_url":"https://www.academia.edu/30158334/Virtual_and_Augmented_Reality_Next_Generation_User_Interfaces_4018166FNR_","translated_internal_url":"","created_at":"2016-11-29T08:00:48.047-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":122094948,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122094948/thumbnails/1.jpg","file_name":"lecture_08_virtualAugmentedReality.pdf","download_url":"https://www.academia.edu/attachments/122094948/download_file","bulk_download_file_name":"Virtual_and_Augmented_Reality_Next_Gener.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122094948/lecture_08_virtualAugmentedReality-libre.pdf?1743408636=\u0026response-content-disposition=attachment%3B+filename%3DVirtual_and_Augmented_Reality_Next_Gener.pdf\u0026Expires=1744203894\u0026Signature=TbGFdKXrBtnev77OPnNhfCv9g3bWzfTKDe-ggv73NbsYXcVa5CKsRn55rHV-kMjy5nn6uObrrDJP3kLqWd27c3GWntbjczwJY3GiixBxqr~8Bb6vej0dkarze5LWzVLFxn9fTcCquUKAgD7HWunYav4JaYlIoPeSJRWbLrrEebCO400v73mThf9iyqAZBKfA3FAb5LOtxrfkRBiUmYdTuyWsHnbYmCcfou8TuBAQWidmlQt7jDiWqcA7hAzutT9htuYdbDTHuv69p7IrBTIKwMT7gfStFixOFW9C3EuhuKmfEILlBKkSgP4zDP~Jox27NMCwBtiavtiO2qsWvOHwSA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Virtual_and_Augmented_Reality_Next_Generation_User_Interfaces_4018166FNR_","translated_slug":"","page_count":49,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Next Generation User Interfaces' course given at the Vrije Universiteit Brussel. 
","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":122094948,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/122094948/thumbnails/1.jpg","file_name":"lecture_08_virtualAugmentedReality.pdf","download_url":"https://www.academia.edu/attachments/122094948/download_file","bulk_download_file_name":"Virtual_and_Augmented_Reality_Next_Gener.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/122094948/lecture_08_virtualAugmentedReality-libre.pdf?1743408636=\u0026response-content-disposition=attachment%3B+filename%3DVirtual_and_Augmented_Reality_Next_Gener.pdf\u0026Expires=1744203894\u0026Signature=TbGFdKXrBtnev77OPnNhfCv9g3bWzfTKDe-ggv73NbsYXcVa5CKsRn55rHV-kMjy5nn6uObrrDJP3kLqWd27c3GWntbjczwJY3GiixBxqr~8Bb6vej0dkarze5LWzVLFxn9fTcCquUKAgD7HWunYav4JaYlIoPeSJRWbLrrEebCO400v73mThf9iyqAZBKfA3FAb5LOtxrfkRBiUmYdTuyWsHnbYmCcfou8TuBAQWidmlQt7jDiWqcA7hAzutT9htuYdbDTHuv69p7IrBTIKwMT7gfStFixOFW9C3EuhuKmfEILlBKkSgP4zDP~Jox27NMCwBtiavtiO2qsWvOHwSA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":1736,"name":"Science Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":2151,"name":"Virtual Reality (Computer Graphics)","url":"https://www.academia.edu/Documents/in/Virtual_Reality_Computer_Graphics_"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous 
Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":4198,"name":"Mobile Technology","url":"https://www.academia.edu/Documents/in/Mobile_Technology"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":39369,"name":"Augmented Reality (Computer Science)","url":"https://www.academia.edu/Documents/in/Augmented_Reality_Computer_Science_"},{"id":41030,"name":"Mobile Augmented Reality","url":"https://www.academia.edu/Documents/in/Mobile_Augmented_Reality"},{"id":50642,"name":"Virtual Reality","url":"https://www.academia.edu/Documents/in/Virtual_Reality"},{"id":289031,"name":"Augmented Reality, Education , Mobile application","url":"https://www.academia.edu/Documents/in/Augmented_Reality_Education_Mobile_application"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"},{"id":1019262,"name":"Virtual Reality Technology","url":"https://www.academia.edu/Documents/in/Virtual_Reality_Technology"}],"urls":[{"id":34176735,"url":"https://wise.vub.ac.be/course/next-generation-user-interfaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-30158334-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="23138235"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/23138235/Web_Search_and_SEO_Web_Technologies_1019888BNR_"><img alt="Research paper thumbnail of Web Search and SEO - Web Technologies (1019888BNR)" class="work-thumbnail" src="https://attachments.academia-assets.com/119843634/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/23138235/Web_Search_and_SEO_Web_Technologies_1019888BNR_">Web Search and SEO - Web Technologies (1019888BNR)</a></div><div 
class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Web Technologies&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="169dd3f505285a0babbcfed8cab9575d" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:119843634,&quot;asset_id&quot;:23138235,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/119843634/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="23138235"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="23138235"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 23138235; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=23138235]").text(description); $(".js-view-count[data-work-id=23138235]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 23138235; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='23138235']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "169dd3f505285a0babbcfed8cab9575d" } } $('.js-work-strip[data-work-id=23138235]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":23138235,"title":"Web Search and SEO - Web Technologies (1019888BNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit Brussel."},"translated_abstract":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit 
Brussel.","internal_url":"https://www.academia.edu/23138235/Web_Search_and_SEO_Web_Technologies_1019888BNR_","translated_internal_url":"","created_at":"2016-03-11T07:41:40.492-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":119843634,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119843634/thumbnails/1.jpg","file_name":"lecture_10_search.pdf","download_url":"https://www.academia.edu/attachments/119843634/download_file","bulk_download_file_name":"Web_Search_and_SEO_Web_Technologies_1019.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119843634/lecture_10_search-libre.pdf?1732704166=\u0026response-content-disposition=attachment%3B+filename%3DWeb_Search_and_SEO_Web_Technologies_1019.pdf\u0026Expires=1744203894\u0026Signature=ENVy2UYEQEt~t7ftUruFspLsAR8eSZKINSKe8iSuzqsQ5zVBKLVwLzN8ltYK0avRLtbOTZjvizagbY8uaExQlHvsyO6Qu-EUjmgYpkh~TLrNPOXtV0P5f4pIVF-gluh4BPFshVbsP1R7iLg~zewLR8ODe~-pUAsWkkevT5wv3qaHp~~e6s~NF1Q92Cmf7-7W8svEIt-0~fHVy-gjhIV~~63BNlQ~5U7txWAQdeqOyGbzkFb2LqqrG1Xg7QXnG5Z7pAxGJssUxyS22o39TwCxG5YnB0m9g8LeLt7YyEgD06LNkW19ql6JVsMz4PUENqEAhmS02saPaYGiWxboAchyAg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Web_Search_and_SEO_Web_Technologies_1019888BNR_","translated_slug":"","page_count":58,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit Brussel.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":119843634,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119843634/thumbnails/1.jpg","file_name":"lecture_10_search.pdf","download_url":"https://www.academia.edu/attachments/119843634/download_file","bulk_download_file_name":"Web_Search_and_SEO_Web_Technologies_1019.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119843634/lecture_10_search-libre.pdf?1732704166=\u0026response-content-disposition=attachment%3B+filename%3DWeb_Search_and_SEO_Web_Technologies_1019.pdf\u0026Expires=1744203894\u0026Signature=ENVy2UYEQEt~t7ftUruFspLsAR8eSZKINSKe8iSuzqsQ5zVBKLVwLzN8ltYK0avRLtbOTZjvizagbY8uaExQlHvsyO6Qu-EUjmgYpkh~TLrNPOXtV0P5f4pIVF-gluh4BPFshVbsP1R7iLg~zewLR8ODe~-pUAsWkkevT5wv3qaHp~~e6s~NF1Q92Cmf7-7W8svEIt-0~fHVy-gjhIV~~63BNlQ~5U7txWAQdeqOyGbzkFb2LqqrG1Xg7QXnG5Z7pAxGJssUxyS22o39TwCxG5YnB0m9g8LeLt7YyEgD06LNkW19ql6JVsMz4PUENqEAhmS02saPaYGiWxboAchyAg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":3457,"name":"Learning and 
Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":8813,"name":"Web search","url":"https://www.academia.edu/Documents/in/Web_search"},{"id":28190,"name":"Semantic Search Engine","url":"https://www.academia.edu/Documents/in/Semantic_Search_Engine"},{"id":35838,"name":"Search Engines","url":"https://www.academia.edu/Documents/in/Search_Engines"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":68725,"name":"Search Engine Optimization","url":"https://www.academia.edu/Documents/in/Search_Engine_Optimization"},{"id":120695,"name":"Search","url":"https://www.academia.edu/Documents/in/Search"},{"id":202805,"name":"Pagerank","url":"https://www.academia.edu/Documents/in/Pagerank"},{"id":962738,"name":"Google Pagerank","url":"https://www.academia.edu/Documents/in/Google_Pagerank"}],"urls":[{"id":34176769,"url":"https://wise.vub.ac.be/course/web-technologies"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-23138235-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="30708437"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/30708437/Security_Privacy_and_Trust_Web_Technologies_1019888BNR_"><img alt="Research paper thumbnail of Security, Privacy and Trust - Web Technologies (1019888BNR)" class="work-thumbnail" src="https://attachments.academia-assets.com/119959551/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/30708437/Security_Privacy_and_Trust_Web_Technologies_1019888BNR_">Security, Privacy and Trust - Web Technologies (1019888BNR)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This lecture forms part of the &#39;Web Technologies&#39; course given at the Vrije Universiteit Brussel.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="0733753bcc7903544ddfe60edae4c898" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:119959551,&quot;asset_id&quot;:30708437,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/119959551/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="30708437"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="30708437"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 30708437; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); 
$(".js-view-count[data-work-id=30708437]").text(description); $(".js-view-count[data-work-id=30708437]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 30708437; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='30708437']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "0733753bcc7903544ddfe60edae4c898" } } $('.js-work-strip[data-work-id=30708437]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":30708437,"title":"Security, Privacy and Trust - Web Technologies (1019888BNR)","translated_title":"","metadata":{"abstract":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit Brussel."},"translated_abstract":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit Brussel.","internal_url":"https://www.academia.edu/30708437/Security_Privacy_and_Trust_Web_Technologies_1019888BNR_","translated_internal_url":"","created_at":"2017-01-02T12:50:17.245-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"teaching_document","co_author_tags":[],"downloadable_attachments":[{"id":119959551,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119959551/thumbnails/1.jpg","file_name":"lecture_11_securityPrivacyTrust.pdf","download_url":"https://www.academia.edu/attachments/119959551/download_file","bulk_download_file_name":"Security_Privacy_and_Trust_Web_Technolog.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119959551/lecture_11_securityPrivacyTrust-libre.pdf?1733158578=\u0026response-content-disposition=attachment%3B+filename%3DSecurity_Privacy_and_Trust_Web_Technolog.pdf\u0026Expires=1744203894\u0026Signature=gZMWqpq3sX~FTiqV2LVmWlGw8ig8ZWKB0YIbdsVCt4uNOZrsYGfAz0tGaONDwS6PsytfJ85J8jvJG4Kcphxs4nSYDt0DkMi32utEw9yKiq8q0TG51OAwwD9DDN1xIlICIsKNXW7AbqjrqBPaHi93gg2SHFw7JtxHA2vaR~zLlM1KjxtZvGyJ-9Rwe53iIeUcVSjj2jATdM3k2kr0N6yrAssnRsGMi53Cw6muqR03L3QJ7VIuYn8tDH6Nw2oPSiha1AUaWpCgv5lnQGCWpAVzJHX4J-hD~iyjipe3QAWTArCwbsdWnMerkIoQJi2NGndndWT1tJ51hp~Gc7tfF-ORog__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Security_Privacy_and_Trust_Web_Technologies_1019888BNR_","translated_slug":"","page_count":39,"language":"en","content_type":"Work","summary":"This lecture forms part of the 'Web Technologies' course given at the Vrije Universiteit 
Brussel.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":119959551,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/119959551/thumbnails/1.jpg","file_name":"lecture_11_securityPrivacyTrust.pdf","download_url":"https://www.academia.edu/attachments/119959551/download_file","bulk_download_file_name":"Security_Privacy_and_Trust_Web_Technolog.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/119959551/lecture_11_securityPrivacyTrust-libre.pdf?1733158578=\u0026response-content-disposition=attachment%3B+filename%3DSecurity_Privacy_and_Trust_Web_Technolog.pdf\u0026Expires=1744203894\u0026Signature=gZMWqpq3sX~FTiqV2LVmWlGw8ig8ZWKB0YIbdsVCt4uNOZrsYGfAz0tGaONDwS6PsytfJ85J8jvJG4Kcphxs4nSYDt0DkMi32utEw9yKiq8q0TG51OAwwD9DDN1xIlICIsKNXW7AbqjrqBPaHi93gg2SHFw7JtxHA2vaR~zLlM1KjxtZvGyJ-9Rwe53iIeUcVSjj2jATdM3k2kr0N6yrAssnRsGMi53Cw6muqR03L3QJ7VIuYn8tDH6Nw2oPSiha1AUaWpCgv5lnQGCWpAVzJHX4J-hD~iyjipe3QAWTArCwbsdWnMerkIoQJi2NGndndWT1tJ51hp~Gc7tfF-ORog__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1128,"name":"Computer Science Education","url":"https://www.academia.edu/Documents/in/Computer_Science_Education"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1384,"name":"Web Engineering","url":"https://www.academia.edu/Documents/in/Web_Engineering"},{"id":1736,"name":"Science Education","url":"https://www.academia.edu/Documents/in/Science_Education"},{"id":2621,"name":"Higher Education","url":"https://www.academia.edu/Documents/in/Higher_Education"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":5978,"name":"Web Technologies","url":"https://www.academia.edu/Documents/in/Web_Technologies"},{"id":29124,"name":"Web Science","url":"https://www.academia.edu/Documents/in/Web_Science"},{"id":141114,"name":"World Wide Web","url":"https://www.academia.edu/Documents/in/World_Wide_Web"}],"urls":[{"id":34176775,"url":"https://wise.vub.ac.be/course/web-technologies"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-30708437-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="4627627" id="editedproceedings"><div class="js-work-strip profile--work_container" data-work-id="22095927"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" 
href="https://www.academia.edu/22095927/Proceedings_of_the_2nd_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2007_"><img alt="Research paper thumbnail of Proceedings of the 2nd International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2007)" class="work-thumbnail" src="https://attachments.academia-assets.com/42767188/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/22095927/Proceedings_of_the_2nd_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2007_">Proceedings of the 2nd International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2007)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">In this paper, we describe the process of using user needs, collected through case communities, t...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">In this paper, we describe the process of using user needs, collected through case communities, to design concepts that link print products and digital services. One of the example cases was an online community, while the other was a traditional off-line community. Users often, perhaps subconsciously, know the strengths of print and web and use the media suitable for their needs accordingly. With a user-centered design approach, it&#39;s possible to create new and enticing hybrid media products for user communities with very different backgrounds.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="a85361157cdcfa8ab950c371a0813534" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:42767188,&quot;asset_id&quot;:22095927,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/42767188/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="22095927"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="22095927"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 22095927; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=22095927]").text(description); $(".js-view-count[data-work-id=22095927]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 22095927; 
window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='22095927']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "a85361157cdcfa8ab950c371a0813534" } } $('.js-work-strip[data-work-id=22095927]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":22095927,"title":"Proceedings of the 2nd International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2007)","translated_title":"","metadata":{"location":"London, UK, November 2007","grobid_abstract":"In this paper, we describe the process of using user needs, collected through case communities, to design concepts that link print products and digital services. One of the example cases was an online community, while the other was a traditional off-line community. Users often, perhaps subconsciously, know the strengths of print and web and use the media suitable for their needs accordingly. 
With a user-centered design approach, it's possible to create new and enticing hybrid media products for user communities with very different backgrounds.","grobid_abstract_attachment_id":42767188},"translated_abstract":null,"internal_url":"https://www.academia.edu/22095927/Proceedings_of_the_2nd_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2007_","translated_internal_url":"","created_at":"2016-02-17T05:58:28.681-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":42767188,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42767188/thumbnails/1.jpg","file_name":"CoPADD2007.pdf","download_url":"https://www.academia.edu/attachments/42767188/download_file","bulk_download_file_name":"Proceedings_of_the_2nd_International_Wor.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42767188/CoPADD2007-libre.pdf?1455717439=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_2nd_International_Wor.pdf\u0026Expires=1744203894\u0026Signature=bIPc~l9tSNEoIqcl-SbFo23QN60FT2DxVcZ3PX2298ScEQZRzWRdrgkQ7aemOoeIaDWF~T0DFLj3XfQ7a6IeOuu5qP55k8-w~s4OhFjzbYhEsZNCaz2D5uQkXHDMoRK7e5pZiEyS-6VrM6Hg2FZ-8aT08M7UFBdNnMVa5Y4NT7QnQgI23ZrBMk1nsfSgyExDn3b1sQzcjmVU~FpxiT0IDpkZXmjejvKHAXUbfvsmEZUCkoh4okLujrMgxvI8lZkz1Pg0Djvi~3Tfhq7jWFDdkc7l9FzcQYVYhdhPIuPVisJtQ9eTOkil4kBB8-oA9lsSWuN3Ef3QKXtduZx9B7WdqA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Proceedings_of_the_2nd_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2007_","translated_slug":"","page_count":48,"language":"en","content_type":"Work","summary":"In this paper, we describe the process of using user needs, collected through case communities, to design concepts that link print products and digital services. One of the example cases was an online community, while the other was a traditional off-line community. Users often, perhaps subconsciously, know the strengths of print and web and use the media suitable for their needs accordingly. 
With a user-centered design approach, it's possible to create new and enticing hybrid media products for user communities with very different backgrounds.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":42767188,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42767188/thumbnails/1.jpg","file_name":"CoPADD2007.pdf","download_url":"https://www.academia.edu/attachments/42767188/download_file","bulk_download_file_name":"Proceedings_of_the_2nd_International_Wor.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42767188/CoPADD2007-libre.pdf?1455717439=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_2nd_International_Wor.pdf\u0026Expires=1744203894\u0026Signature=bIPc~l9tSNEoIqcl-SbFo23QN60FT2DxVcZ3PX2298ScEQZRzWRdrgkQ7aemOoeIaDWF~T0DFLj3XfQ7a6IeOuu5qP55k8-w~s4OhFjzbYhEsZNCaz2D5uQkXHDMoRK7e5pZiEyS-6VrM6Hg2FZ-8aT08M7UFBdNnMVa5Y4NT7QnQgI23ZrBMk1nsfSgyExDn3b1sQzcjmVU~FpxiT0IDpkZXmjejvKHAXUbfvsmEZUCkoh4okLujrMgxvI8lZkz1Pg0Djvi~3Tfhq7jWFDdkc7l9FzcQYVYhdhPIuPVisJtQ9eTOkil4kBB8-oA9lsSWuN3Ef3QKXtduZx9B7WdqA__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":10165,"name":"Interactive and Digital 
Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-22095927-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="22094334"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/22094334/Proceedings_of_the_1st_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2006_"><img alt="Research paper thumbnail of Proceedings of the 1st International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2006)" class="work-thumbnail" src="https://attachments.academia-assets.com/42765808/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/22094334/Proceedings_of_the_1st_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2006_">Proceedings of the 1st International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2006)</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This paper presents a discussion of work combining paper maps with electronic information on hand...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This paper presents a discussion of work combining paper maps with electronic information on handheld devices for use in a mobile context. We consider the benefits of paper, electronic media, and mixing the two in relation to the domain of group navigation. A prototype design is described that attempts to utilize these benefits toward providing a lightweight, ad hoc group navigation support system. 
Of particular interest is the extent to which an augmented paper map can be used as a shared display, to enhance communication and awareness of the activities of group members while navigating together.</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="989547b34c10f6454c2b98ceaeee6420" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:42765808,&quot;asset_id&quot;:22094334,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/42765808/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="22094334"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="22094334"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 22094334; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=22094334]").text(description); $(".js-view-count[data-work-id=22094334]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 22094334; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='22094334']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "989547b34c10f6454c2b98ceaeee6420" } } $('.js-work-strip[data-work-id=22094334]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":22094334,"title":"Proceedings of the 1st International Workshop on Collaborating over Paper and Digital Documents (CoPADD 2006)","translated_title":"","metadata":{"location":"Banff, Canada, November 2006","ai_title_tag":"Integrating Paper Maps and Digital Tools for Group Navigation","grobid_abstract":"This paper presents a discussion of work combining paper maps with electronic information on handheld devices for use in a mobile context. We consider the benefits of paper, electronic media, and mixing the two in relation to the domain of group navigation. 
A prototype design is described that attempts to utilize these benefits toward providing a lightweight, ad hoc group navigation support system. Of particular interest is the extent to which an augmented paper map can be used as a shared display, to enhance communication and awareness of the activities of group members while navigating together.","grobid_abstract_attachment_id":42765808},"translated_abstract":null,"internal_url":"https://www.academia.edu/22094334/Proceedings_of_the_1st_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2006_","translated_internal_url":"","created_at":"2016-02-17T05:19:02.946-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":42765808,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42765808/thumbnails/1.jpg","file_name":"CoPADD2006.pdf","download_url":"https://www.academia.edu/attachments/42765808/download_file","bulk_download_file_name":"Proceedings_of_the_1st_International_Wor.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42765808/CoPADD2006-libre.pdf?1455715165=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_1st_International_Wor.pdf\u0026Expires=1744203894\u0026Signature=DHKTRuSKr8ufHbyzOg4dSbFZNykPo8FfumEyJUgSWnhIj1b0nzMnFkCNAHNrCluRDGQdl4oVgkqQ~yYVoxxgNIh~j-~Bgct0gcKWetbd0Pnt94dwDhX38BHdFFiCDIh7ziyfjuIyX7TWwHMBG1LVhLhH9pdrzm~ky2Uzgyl8fJtt61nz4pu0UCv~9vJMoog8W~7bwvot3ceUGeZwvzNAs5Gg6VdYgldcoMkJm6U3bhgVoKDq2QJ2iUFly4g4qMMLBaaRqLryEZZt1RKyrT7I2afuv-67aL6IU5Kc0T3Bth2C-jm02cbjQSApdvG1YYvYw8HUoXNu2eyXlF9Lrqa3eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Proceedings_of_the_1st_International_Workshop_on_Collaborating_over_Paper_and_Digital_Documents_CoPADD_2006_","translated_slug":"","page_count":47,"language":"en","content_type":"Work","summary":"This paper presents a discussion of work combining paper maps with electronic information on handheld devices for use in a mobile context. We consider the benefits of paper, electronic media, and mixing the two in relation to the domain of group navigation. A prototype design is described that attempts to utilize these benefits toward providing a lightweight, ad hoc group navigation support system. 
Of particular interest is the extent to which an augmented paper map can be used as a shared display, to enhance communication and awareness of the activities of group members while navigating together.","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":42765808,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42765808/thumbnails/1.jpg","file_name":"CoPADD2006.pdf","download_url":"https://www.academia.edu/attachments/42765808/download_file","bulk_download_file_name":"Proceedings_of_the_1st_International_Wor.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42765808/CoPADD2006-libre.pdf?1455715165=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_1st_International_Wor.pdf\u0026Expires=1744203894\u0026Signature=DHKTRuSKr8ufHbyzOg4dSbFZNykPo8FfumEyJUgSWnhIj1b0nzMnFkCNAHNrCluRDGQdl4oVgkqQ~yYVoxxgNIh~j-~Bgct0gcKWetbd0Pnt94dwDhX38BHdFFiCDIh7ziyfjuIyX7TWwHMBG1LVhLhH9pdrzm~ky2Uzgyl8fJtt61nz4pu0UCv~9vJMoog8W~7bwvot3ceUGeZwvzNAs5Gg6VdYgldcoMkJm6U3bhgVoKDq2QJ2iUFly4g4qMMLBaaRqLryEZZt1RKyrT7I2afuv-67aL6IU5Kc0T3Bth2C-jm02cbjQSApdvG1YYvYw8HUoXNu2eyXlF9Lrqa3eQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":492,"name":"Management Information Systems","url":"https://www.academia.edu/Documents/in/Management_Information_Systems"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":2129,"name":"Computer Supported Cooperative Work (CSCW)","url":"https://www.academia.edu/Documents/in/Computer_Supported_Cooperative_Work_CSCW_"},{"id":2875,"name":"User Experience (UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":9134,"name":"Pervasive 
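One building block such an augmented paper map needs is a calibration that turns pen positions on the printed page into geographic coordinates that can then be shared with the group. The sketch below is our own illustration of that step under the assumption of a simple linear calibration for a small map area; it is not code from the CoPADD 2006 prototype, and all names are hypothetical.

// Sketch: translate pen coordinates on a calibrated paper map into
// geographic coordinates (simple linear interpolation; projection
// distortion is ignored for small areas). Our own illustration.
public class PaperMap {
    // Page-space rectangle of the printed map and the geographic
    // bounding box it depicts.
    private final double pageX0, pageY0, pageX1, pageY1;
    private final double lonWest, latNorth, lonEast, latSouth;

    public PaperMap(double pageX0, double pageY0, double pageX1, double pageY1,
                    double lonWest, double latNorth, double lonEast, double latSouth) {
        this.pageX0 = pageX0; this.pageY0 = pageY0;
        this.pageX1 = pageX1; this.pageY1 = pageY1;
        this.lonWest = lonWest; this.latNorth = latNorth;
        this.lonEast = lonEast; this.latSouth = latSouth;
    }

    // Map a pen position (page coordinates, origin top-left) to lon/lat.
    public double[] toGeo(double penX, double penY) {
        double u = (penX - pageX0) / (pageX1 - pageX0);
        double v = (penY - pageY0) / (pageY1 - pageY0);
        double lon = lonWest + u * (lonEast - lonWest);
        double lat = latNorth + v * (latSouth - latNorth);
        return new double[] { lon, lat };
    }
}

A shared display could broadcast the resulting lon/lat pairs to the other group members' handhelds, which is the kind of awareness support the abstract describes.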
Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[{"id":6778348,"url":"http://www.beatsigner.com/docs/CoPADD2006.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-22094334-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="22090507"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/22090507/Proceedings_of_the_Workshop_on_Engineering_Gestures_for_Multimodal_Interfaces_EGMI_2014_"><img alt="Research paper thumbnail of Proceedings of the Workshop on Engineering Gestures for Multimodal Interfaces (EGMI 2014)" class="work-thumbnail" src="https://attachments.academia-assets.com/42762885/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/22090507/Proceedings_of_the_Workshop_on_Engineering_Gestures_for_Multimodal_Interfaces_EGMI_2014_">Proceedings of the Workshop on Engineering Gestures for Multimodal Interfaces (EGMI 2014)</a></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="e90860e1ef3b24897de5cd8a5a458f15" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:42762885,&quot;asset_id&quot;:22090507,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/42762885/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" 
style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="22090507"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="22090507"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 22090507; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=22090507]").text(description); $(".js-view-count[data-work-id=22090507]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 22090507; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='22090507']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "e90860e1ef3b24897de5cd8a5a458f15" } } $('.js-work-strip[data-work-id=22090507]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":22090507,"title":"Proceedings of the Workshop on Engineering Gestures for Multimodal Interfaces (EGMI 2014)","translated_title":"","metadata":{"location":"Rome, Italy, June 17, 2014","ai_abstract":"The proceedings of the EGMI 2014 workshop highlight the increasing interest in multimodal user interfaces (MMUIs) and the challenges in designing development tools for gesture-based interactions. Despite advancements in hardware and use cases, such as smartphones and devices like Microsoft Kinect, existing APIs and programming paradigms remain outdated and lack support for diverse input methods. 
The workshop presented six peer-reviewed papers addressing various aspects of gesture interaction and programming, underscoring the need for innovation in user interface design."},"translated_abstract":null,"internal_url":"https://www.academia.edu/22090507/Proceedings_of_the_Workshop_on_Engineering_Gestures_for_Multimodal_Interfaces_EGMI_2014_","translated_internal_url":"","created_at":"2016-02-17T03:55:03.270-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[{"id":42762885,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42762885/thumbnails/1.jpg","file_name":"EGMI2014.pdf","download_url":"https://www.academia.edu/attachments/42762885/download_file","bulk_download_file_name":"Proceedings_of_the_Workshop_on_Engineeri.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42762885/EGMI2014-libre.pdf?1455710011=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_Workshop_on_Engineeri.pdf\u0026Expires=1744203894\u0026Signature=BLmCmyaoOMVJ~lZ4Qc0bTCrA4ubaXQHpydmd4elhuEk~qDLCmXYdU3gZi9sSPdJDftrdQNyaEK94h1f6ylPd1bJ5SqzUdtyHwfXRRortaOxriCiJePeaLElh6Yy48K-~ud-8ow9kcc7ba6tC59nN6h6g4cx46xuD5ERsaOjfmj87YHEOy6DD0JDGFAfLFd0x--xBZNv9YwFXo1gi8hD273W8fwqmH3j~SHkGdyzH6FJiIeMioI5LK4AFdDuB-ohC0tSlJi71e3IJNbvXmDganXIhgFy35tZtKoQ64EjZwmpzR5L9eutWDKY8vb-XkBLeGEh~t3o5bwFnghqZBM7iLg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Proceedings_of_the_Workshop_on_Engineering_Gestures_for_Multimodal_Interfaces_EGMI_2014_","translated_slug":"","page_count":4,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":42762885,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/42762885/thumbnails/1.jpg","file_name":"EGMI2014.pdf","download_url":"https://www.academia.edu/attachments/42762885/download_file","bulk_download_file_name":"Proceedings_of_the_Workshop_on_Engineeri.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/42762885/EGMI2014-libre.pdf?1455710011=\u0026response-content-disposition=attachment%3B+filename%3DProceedings_of_the_Workshop_on_Engineeri.pdf\u0026Expires=1744203894\u0026Signature=BLmCmyaoOMVJ~lZ4Qc0bTCrA4ubaXQHpydmd4elhuEk~qDLCmXYdU3gZi9sSPdJDftrdQNyaEK94h1f6ylPd1bJ5SqzUdtyHwfXRRortaOxriCiJePeaLElh6Yy48K-~ud-8ow9kcc7ba6tC59nN6h6g4cx46xuD5ERsaOjfmj87YHEOy6DD0JDGFAfLFd0x--xBZNv9YwFXo1gi8hD273W8fwqmH3j~SHkGdyzH6FJiIeMioI5LK4AFdDuB-ohC0tSlJi71e3IJNbvXmDganXIhgFy35tZtKoQ64EjZwmpzR5L9eutWDKY8vb-XkBLeGEh~t3o5bwFnghqZBM7iLg__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1830,"name":"Gesture Studies","url":"https://www.academia.edu/Documents/in/Gesture_Studies"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":11081,"name":"Multimodal 
Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":21201,"name":"Multimodality","url":"https://www.academia.edu/Documents/in/Multimodality"},{"id":53293,"name":"Software","url":"https://www.academia.edu/Documents/in/Software"},{"id":64453,"name":"Gestures","url":"https://www.academia.edu/Documents/in/Gestures"},{"id":97585,"name":"User interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":317863,"name":"Hand Gesture Recognition System","url":"https://www.academia.edu/Documents/in/Hand_Gesture_Recognition_System"},{"id":550029,"name":"Gesture Technology","url":"https://www.academia.edu/Documents/in/Gesture_Technology"}],"urls":[{"id":6777919,"url":"http://ceur-ws.org/Vol-1190/"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-22090507-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="2099779" id="dissertations"><div class="js-work-strip profile--work_container" data-work-id="175442"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/175442/Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces"><img alt="Research paper thumbnail of Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces" class="work-thumbnail" src="https://attachments.academia-assets.com/12381060/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/175442/Fundamental_Concepts_for_Interactive_Paper_and_Cross_Media_Information_Spaces">Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">&quot;While there have been dramatic increases in the use of digital technologies for information stor...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">&quot;While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. Despite predictions of the paperless office, paper is ever more present in our daily work as reflected by the continuously increasing worldwide paper consumption. <br /> <br />Many researchers have argued for the retention of paper as an information resource and its integration into cross-media environments as opposed to its replacement. This has resulted in a wide variety of projects and technological developments for digitally augmented paper documents over the past decade. 
However, the majority of the realised projects focus on technical advances in terms of hardware but pay less attention to the very fundamental information integration and cross-media information management issues. <br /> <br />Our information-centric approach for a tight integration of paper and digital information is based on extending an object-oriented database management system with functionality for cross-media information management. The resulting iServer platform introduces fundamental link concepts at an abstract level. The iServer’s core link management functionality is available across different multimedia resources. Only the media-specific portion of these general concepts, for example the specification of a link’s source anchor, has to be implemented in the form of a plug-in to support new resource types. This resource plug-in mechanism results in a flexible and extensible system where new types of digital as well as physical resources can easily be integrated and, more importantly, cross-linked to the growing set of supported multimedia resources. In addition to the associative linking of information, our solution allows for the integration of semantic metadata and supports multiple classification of information units. iServer can, not only link between various static information entities, but also link to active content and this has proven to be very effective in enabling more complex interaction design. <br /> <br />As part of the European project Paper++, under the Disappearing Computer Programme, an iServer plug-in for interactive paper has been implemented to fully integrate paper and digital media, thereby gaining the best of the physical and the digital worlds. It not only supports linking from physical paper to digital information, but also enables links from digital content to physical paper or even paper to paper links. This multi-mode user interface results in highly interactive systems where users can easily switch back and forth between paper and digital information. The definition of an abstract input device interface further provides flexibility for supporting emerging technologies for paper link definition in addition to the hardware solutions for paper link definition and activation that were developed within the Paper++ project. <br /> <br />We introduce different approaches for cross-media information authoring where information is either compiled by established publishers with an expertise in a specific domain or by individuals who produce their own cross-media information environments. Preauthored information can be combined with personally aggregated information. A distributed peer-to-peer version of the iServer platform supports collaborative authoring and the sharing of link knowledge within a community of users. <br /> <br />The associations between different types of resources as well as other application-specific information can be visualised on different output channels. Universal access to the iServer’s information space is granted using the eXtensible Information Management Architecture (XIMA), our publishing platform for multi-channel access. <br /> <br />Our fundamental concepts for interactive paper and cross-media information management have been designed independently of particular hardware solutions and modes of interaction which enables the iServer platform to easily adapt to both new technologies and applications. 
Finally, the information infrastructure that we have developed has great potential as an experimental platform for the investigation of emerging multimedia resources in general and interactive paper with its possible applications in particular.&quot;</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="49e24365bf3b90de9d20d9ac90e781c8" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:12381060,&quot;asset_id&quot;:175442,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/12381060/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="175442"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="175442"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 175442; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=175442]").text(description); $(".js-view-count[data-work-id=175442]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 175442; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='175442']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "49e24365bf3b90de9d20d9ac90e781c8" } } $('.js-work-strip[data-work-id=175442]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":175442,"title":"Fundamental Concepts for Interactive Paper and Cross-Media Information Spaces","translated_title":"","metadata":{"abstract":"\"While there have been dramatic increases in the use of digital technologies for information storage, processing and delivery over the last twenty years, the affordances of paper have ensured its retention as a key information medium. 
class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/123739194/As_We_May_Interact_Challenges_and_Opportunities_for_Next_Generation_Human_Information_Interaction">As We May Interact: Challenges and Opportunities for Next-Generation Human-Information Interaction</a></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a solution to address information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers like Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent in our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. 
We outline the challenges and opportunities for next-generation multimodal human-information interaction, enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.

Research paper: https://beatsigner.com/publications/as-we-may-interact-challenges-and-opportunities-for-next-generation-human-information-interaction.pdf
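As a rough illustration of the resource-selector-link structure mentioned above, the following Java sketch (our own minimal reading of the RSL idea, with invented names rather than the actual metamodel implementation) shows resources, selectors and links as specialisations of a common entity type, so that n-ary links can connect arbitrary entities, including other links.

```java
// Minimal sketch of the resource-selector-link (RSL) structure; the names
// are illustrative and not the actual metamodel implementation.

import java.util.HashSet;
import java.util.Set;

abstract class Entity { }        // common supertype of everything linkable

class Resource extends Entity {  // a whole medium: document, image, page, ...
    final String uri;
    Resource(String uri) { this.uri = uri; }
}

class Selector extends Entity {  // addresses part of a resource
    final Resource resource;
    Selector(Resource resource) { this.resource = resource; }
}

class Link extends Entity {      // n-ary link over arbitrary entities
    final Set<Entity> sources = new HashSet<>();
    final Set<Entity> targets = new HashSet<>();
}

public class RslSketch {
    public static void main(String[] args) {
        Resource map = new Resource("printed-map");
        Resource video = new Resource("https://example.org/tour.mp4");

        // Link from a part of a physical resource to a digital one.
        Link l1 = new Link();
        l1.sources.add(new Selector(map));
        l1.targets.add(video);

        // Since links are entities too, a link can itself be linked,
        // e.g. to attach a comment to an existing association.
        Link l2 = new Link();
        l2.sources.add(l1);
        l2.targets.add(new Resource("comment.txt"));

        System.out.println(l2.sources.contains(l1));  // true
    }
}
```

The key structural point is that links range over entities rather than addresses, which is what lets one model span applications, devices and physical media uniformly.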
href="https://www.academia.edu/39336011/Towards_Cross_Media_Information_Spaces_and_Architectures"><img alt="Research paper thumbnail of Towards Cross-Media Information Spaces and Architectures" class="work-thumbnail" src="https://attachments.academia-assets.com/59476185/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/39336011/Towards_Cross_Media_Information_Spaces_and_Architectures">Towards Cross-Media Information Spaces and Architectures</a></div><div class="wp-workCard_item"><span>Keynote at RCIS 2019, Brussels, Belgium</span><span>, 2019</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The efficient management and retrieval of information has been investigated since the early days ...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The efficient management and retrieval of information has been investigated since the early days of Vannevar Bush&#39;s seminal article &#39;As We May Think&#39; introducing the Memex. However, nowadays information is fragmented across different media types, devices as well as digital and physical environments and we are often struggling to find information. In this keynote I will discuss three main issues to be addressed when developing solutions for managing information in these co-called cross-media information spaces. We first have a look at an extensible cross-media linking solution based on the RSL (resource-selector-link) hypermedia metamodel where information can be integrated across applications, devices as well as digital and physical information environments. I will then outline some of the limitations of existing digital document formats which are often just a simulation of paper documents and their affordances on desktop computers, and discuss more flexible document representations for cross-media information spaces. Further, new forms of human-information interaction and cross-media user interfaces, including some recent work on dynamic data physicalisation, are introduced. Various research artefacts such as the EdFest interactive paper prototype, the PimVis solution for personal cross-media information management or the MindXpres platform for next generation presentation solutions will be used to illustrate different aspects of the presented data-centric approach for cross-media information spaces and architectures. 
Last but not least, I will provide an outlook on how embedding the presented concepts at the level of an operating system might ultimately lead to new possibilities for cross-media information management and innovative forms of human-information interaction.

Research paper: https://beatsigner.com/publications/towards-cross-media-information-spaces-and-architectures.pdf
Slides: https://speakerdeck.com/signer/towards-cross-media-information-spaces-and-architectures
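One way to picture the "more flexible document representations" argued for in the keynote is a document that merely references shared content snippets instead of embedding copies. The following toy Java sketch is our own example under that assumption (nothing here is a format proposed in the talk): two documents transclude the same snippet, so an edit or a link attached to the snippet is visible from both.

```java
// Toy sketch of a "document as structure over shared snippets" idea; our own
// illustration, not a document format from the keynote.

import java.util.ArrayList;
import java.util.List;

class Snippet {                   // a single, independently addressable unit of content
    String text;
    Snippet(String text) { this.text = text; }
}

class Document {                  // a document only *references* snippets
    final String title;
    final List<Snippet> parts = new ArrayList<>();
    Document(String title) { this.title = title; }
}

public class TransclusionDemo {
    public static void main(String[] args) {
        Snippet shared = new Snippet("RSL core definitions");

        Document slides = new Document("RCIS 2019 keynote");
        Document paper  = new Document("Companion paper");
        slides.parts.add(shared);
        paper.parts.add(shared);  // the same snippet transcluded, not copied

        // An edit to the snippet is visible from every document that uses it,
        // and links attached to the snippet travel with it.
        shared.text = "RSL core definitions (revised)";
        System.out.println(slides.parts.get(0) == paper.parts.get(0));  // true
    }
}
```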
Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":11732,"name":"Linked Data","url":"https://www.academia.edu/Documents/in/Linked_Data"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"}],"urls":[{"id":15699108,"url":"https://speakerdeck.com/signer/towards-cross-media-information-spaces-and-architectures"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-39336011-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="37315995"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/37315995/Cross_Media_Document_Linking_and_Navigation"><img alt="Research paper thumbnail of Cross-Media Document Linking and Navigation" class="work-thumbnail" src="https://attachments.academia-assets.com/77359982/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/37315995/Cross_Media_Document_Linking_and_Navigation">Cross-Media Document Linking and Navigation</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/PayamEbrahimi">Payam Ebrahimi</a></span></div><div class="wp-workCard_item"><span>Presentation given at DocEng 2018, 18th ACM Symposium on Document Engineering, Halifax, Canada</span><span>, 2018</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Documents do often not exist in isolation but are implicitly or explicitly linked to parts of oth...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Documents do often not exist in isolation but are implicitly or explicitly linked to parts of other documents. However, due to a multitude of proprietary document formats with rather simple link models, today&#39;s possibilities for creating hyperlinks between snippets of information in different document formats are limited. In previous work, we have presented a dynamically extensible cross-document link service overcoming the limitations of the simple link models supported by most existing document formats. Based on a plug-in mechanism, our link service enables the linking across different document types. In this paper, we assess the extensibility of our link service by integrating some document formats as well as third-party document viewers. We illustrate the flexibility of creating advanced hyperlinks across these document formats and viewers that cannot be realised with existing linking solutions or link models of existing document formats. 
A user study further investigates the user experience when creating and navigating cross-document hyperlinks.

Research paper: https://beatsigner.com/publications/cross-media-document-linking-and-navigation.pdf
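The plug-in mechanism of such a link service lends itself to a small sketch. The following Java example is a hypothetical illustration (interface and class names are ours, not those of the actual system): each document format contributes a plug-in that knows how to navigate to a part of one of its documents, and the service dispatches link navigation to the plug-in registered for the target's format.

```java
// Hypothetical sketch of a plug-in based cross-document link service; all
// interface and class names are invented for this illustration.

import java.util.HashMap;
import java.util.Map;

// Each supported document format (PDF, HTML, spreadsheets, ...) contributes
// a plug-in that knows how to address and display parts of its documents.
interface FormatPlugin {
    String formatId();  // e.g. "application/pdf"
    void navigateTo(String documentUri, String selector);
}

// The format-independent link service dispatches navigation requests to the
// plug-in registered for the target document's format.
class LinkService {
    private final Map<String, FormatPlugin> plugins = new HashMap<>();

    void register(FormatPlugin plugin) {
        plugins.put(plugin.formatId(), plugin);
    }

    void follow(String formatId, String documentUri, String selector) {
        FormatPlugin plugin = plugins.get(formatId);
        if (plugin == null) {
            throw new IllegalStateException("no plug-in for " + formatId);
        }
        plugin.navigateTo(documentUri, selector);
    }
}

// Example plug-in: opening a PDF viewer at a given page.
class PdfPlugin implements FormatPlugin {
    public String formatId() { return "application/pdf"; }
    public void navigateTo(String documentUri, String selector) {
        System.out.println("opening " + documentUri + " at " + selector);
    }
}

public class LinkServiceDemo {
    public static void main(String[] args) {
        LinkService service = new LinkService();
        service.register(new PdfPlugin());
        service.follow("application/pdf", "report.pdf", "page=7");
    }
}
```

Registering a plug-in for a new format or a third-party viewer requires no change to the service itself, which is the extensibility property the paper sets out to assess.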
However, due to a multitude of proprietary document formats with rather simple link models, today's possibilities for creating hyperlinks between snippets of information in different document formats are limited. In previous work, we have presented a dynamically extensible cross-document link service overcoming the limitations of the simple link models supported by most existing document formats. Based on a plug-in mechanism, our link service enables the linking across different document types. In this paper, we assess the extensibility of our link service by integrating some document formats as well as third-party document viewers. We illustrate the flexibility of creating advanced hyperlinks across these document formats and viewers that cannot be realised with existing linking solutions or link models of existing document formats. A user study further investigates the user experience when creating and navigating cross-document hyperlinks.\n\nResearch paper: https://beatsigner.com/publications/cross-media-document-linking-and-navigation.pdf","ai_title_tag":"Extensible Cross-Document Linking and User Experience Study","publication_date":{"day":null,"month":null,"year":2018,"errors":{}},"publication_name":"Presentation given at DocEng 2018, 18th ACM Symposium on Document Engineering, Halifax, Canada"},"translated_abstract":"Documents do often not exist in isolation but are implicitly or explicitly linked to parts of other documents. However, due to a multitude of proprietary document formats with rather simple link models, today's possibilities for creating hyperlinks between snippets of information in different document formats are limited. In previous work, we have presented a dynamically extensible cross-document link service overcoming the limitations of the simple link models supported by most existing document formats. Based on a plug-in mechanism, our link service enables the linking across different document types. In this paper, we assess the extensibility of our link service by integrating some document formats as well as third-party document viewers. We illustrate the flexibility of creating advanced hyperlinks across these document formats and viewers that cannot be realised with existing linking solutions or link models of existing document formats. 
A user study further investigates the user experience when creating and navigating cross-document hyperlinks.\n\nResearch paper: https://beatsigner.com/publications/cross-media-document-linking-and-navigation.pdf","internal_url":"https://www.academia.edu/37315995/Cross_Media_Document_Linking_and_Navigation","translated_internal_url":"","created_at":"2018-08-30T04:41:49.573-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[{"id":31838458,"work_id":37315995,"tagging_user_id":13155,"tagged_user_id":1765495,"co_author_invite_id":null,"email":"h***h@hotmail.com","affiliation":"Universiteit Gent and Vrije Universiteit Brussel","display_order":1,"name":"Ahmed Tayeh","title":"Cross-Media Document Linking and Navigation"},{"id":31838459,"work_id":37315995,"tagging_user_id":13155,"tagged_user_id":3163292,"co_author_invite_id":null,"email":"p***m@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":2,"name":"Payam Ebrahimi","title":"Cross-Media Document Linking and Navigation"}],"downloadable_attachments":[{"id":77359982,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77359982/thumbnails/1.jpg","file_name":"DocEng2018.pdf","download_url":"https://www.academia.edu/attachments/77359982/download_file","bulk_download_file_name":"Cross_Media_Document_Linking_and_Navigat.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77359982/DocEng2018-libre.pdf?1640512981=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Document_Linking_and_Navigat.pdf\u0026Expires=1744203894\u0026Signature=ckfGK-okN9yQ-Ylo3X8REzxSrmuOy2SbB0WiECeUzHT8sUpCfaXAF6Pr4l1ejhyq~ug41t6CeTbSKa6UOmRueQnxSpTl51YYVNo3em-MTsodHU04H5YePcmEMz3BHYfjfOHbEwqfJrhCzrtzLSeEtwG5GwEJqpwS3mxqE1mwGHOQ-uWfoEhPFUP2j2WgU~bncygNMGhXkyLkIpfA0aMfq5CYyxVYjRh7fMurQVHHMJ3fdkkFbacwlV68FXHbKmKz0bqYXPE9N5YsJs~RoIiVNCPICdSUr~ScNVj5J2vNH15Rr~u5usuqTqukGu~khD-bqAjlcbqe9O6DM2Q8COJNzQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Cross_Media_Document_Linking_and_Navigation","translated_slug":"","page_count":23,"language":"en","content_type":"Work","summary":"Documents do often not exist in isolation but are implicitly or explicitly linked to parts of other documents. However, due to a multitude of proprietary document formats with rather simple link models, today's possibilities for creating hyperlinks between snippets of information in different document formats are limited. In previous work, we have presented a dynamically extensible cross-document link service overcoming the limitations of the simple link models supported by most existing document formats. Based on a plug-in mechanism, our link service enables the linking across different document types. In this paper, we assess the extensibility of our link service by integrating some document formats as well as third-party document viewers. We illustrate the flexibility of creating advanced hyperlinks across these document formats and viewers that cannot be realised with existing linking solutions or link models of existing document formats. 
A user study further investigates the user experience when creating and navigating cross-document hyperlinks.\n\nResearch paper: https://beatsigner.com/publications/cross-media-document-linking-and-navigation.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77359982,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77359982/thumbnails/1.jpg","file_name":"DocEng2018.pdf","download_url":"https://www.academia.edu/attachments/77359982/download_file","bulk_download_file_name":"Cross_Media_Document_Linking_and_Navigat.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77359982/DocEng2018-libre.pdf?1640512981=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Document_Linking_and_Navigat.pdf\u0026Expires=1744203894\u0026Signature=ckfGK-okN9yQ-Ylo3X8REzxSrmuOy2SbB0WiECeUzHT8sUpCfaXAF6Pr4l1ejhyq~ug41t6CeTbSKa6UOmRueQnxSpTl51YYVNo3em-MTsodHU04H5YePcmEMz3BHYfjfOHbEwqfJrhCzrtzLSeEtwG5GwEJqpwS3mxqE1mwGHOQ-uWfoEhPFUP2j2WgU~bncygNMGhXkyLkIpfA0aMfq5CYyxVYjRh7fMurQVHHMJ3fdkkFbacwlV68FXHbKmKz0bqYXPE9N5YsJs~RoIiVNCPICdSUr~ScNVj5J2vNH15Rr~u5usuqTqukGu~khD-bqAjlcbqe9O6DM2Q8COJNzQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1012,"name":"Digital Libraries","url":"https://www.academia.edu/Documents/in/Digital_Libraries"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":97585,"name":"User 
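The plug-in mechanism mentioned in the abstract can be illustrated with a small TypeScript sketch. Note that all names below (Selector, DocumentPlugin, LinkService) are hypothetical illustrations of the described architecture and do not correspond to the actual API of the presented link service.

// Hypothetical sketch of a plug-in based cross-document link service.
// A selector addresses a snippet of information inside a document of a
// given format (e.g. a character range in a text file or a region on a
// PDF page); its serialisation is format specific.
interface Selector {
  documentUri: string;
  description: string; // format-specific serialisation of the selection
}

// Each supported document format or third-party viewer is integrated
// via its own plug-in, which knows how to resolve a selector.
interface DocumentPlugin {
  readonly format: string; // e.g. 'application/pdf'
  resolve(selector: Selector): void; // open the viewer and show the snippet
}

// A link associates a source selector with one or more target
// selectors, possibly spanning different document formats.
interface Link {
  source: Selector;
  targets: Selector[];
}

class LinkService {
  private plugins = new Map<string, DocumentPlugin>();
  private links: Link[] = [];

  // Plug-ins can be registered at runtime, which makes the service
  // dynamically extensible to new document formats and viewers.
  register(plugin: DocumentPlugin): void {
    this.plugins.set(plugin.format, plugin);
  }

  addLink(link: Link): void {
    this.links.push(link);
  }

  // Navigating a link delegates to the plug-in responsible for the
  // target document's format.
  navigate(target: Selector, format: string): void {
    const plugin = this.plugins.get(format);
    if (!plugin) {
      throw new Error(`No plug-in registered for format ${format}`);
    }
    plugin.resolve(target);
  }
}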
interfaces","url":"https://www.academia.edu/Documents/in/User_interfaces"},{"id":193390,"name":"RSL","url":"https://www.academia.edu/Documents/in/RSL"}],"urls":[{"id":14771318,"url":"https://beatsigner.com/publications/cross-media-document-linking-and-navigation.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-37315995-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="63182979"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/63182979/Indoor_Positioning_Using_the_OpenHPS_Framework"><img alt="Research paper thumbnail of Indoor Positioning Using the OpenHPS Framework" class="work-thumbnail" src="https://attachments.academia-assets.com/75690803/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/63182979/Indoor_Positioning_Using_the_OpenHPS_Framework">Indoor Positioning Using the OpenHPS Framework</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/MaximVandeWynckel">Maxim Van de Wynckel</a></span></div><div class="wp-workCard_item"><span>Presentation given at IPIN 2021, 11th International Conference on Indoor Positioning and Indoor Navigation, Lloret de Mar, Spain</span><span>, 2021</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through d...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal lowas well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and online stage, and how it can be adapted for high accuracy or low latency. 
For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points, BLE beacons as well as several trajectories that include IMU data.<br /><br />Research paper: <a href="https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf" rel="nofollow">https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf</a></span></div><div class="wp-workCard_item"><div class="carousel-container carousel-container--sm" id="profile-work-63182979-figures"><div class="prev-slide-container js-prev-button-container"><button aria-label="Previous" class="carousel-navigation-button js-profile-work-63182979-figures-prev"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_back_ios</span></button></div><div class="slides-container js-slides-container"><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181517/figure-1-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_001.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181523/figure-2-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_002.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181528/figure-3-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_003.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181531/figure-4-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_004.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181539/figure-5-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_005.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181542/figure-6-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_006.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181547/figure-7-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_007.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181549/figure-8-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_008.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181553/figure-9-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_009.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181557/figure-10-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" 
src="https://figures.academia-assets.com/75690803/figure_010.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181559/figure-11-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_011.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181568/figure-12-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_012.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181573/figure-13-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_013.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181585/figure-14-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_014.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181592/figure-15-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_015.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181604/figure-16-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_016.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181613/figure-17-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_017.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181668/figure-18-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_018.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181682/figure-19-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_019.jpg" /></a></figure><figure class="figure-slide-container"><a href="https://www.academia.edu/figures/10181691/figure-20-indoor-positioning-using-the-openhps-framework"><img alt="" class="figure-slide-image" src="https://figures.academia-assets.com/75690803/figure_020.jpg" /></a></figure></div><div class="next-slide-container js-next-button-container"><button aria-label="Next" class="carousel-navigation-button js-profile-work-63182979-figures-next"><span class="material-symbols-outlined" style="font-size: 24px" translate="no">arrow_forward_ios</span></button></div></div></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="7efdd201e87a494f0080af015bb3e581" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:75690803,&quot;asset_id&quot;:63182979,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/75690803/download_file?s=profile"><span><i 
class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="63182979"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="63182979"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 63182979; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=63182979]").text(description); $(".js-view-count[data-work-id=63182979]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 63182979; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='63182979']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "7efdd201e87a494f0080af015bb3e581" } } $('.js-work-strip[data-work-id=63182979]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":63182979,"title":"Indoor Positioning Using the OpenHPS Framework","translated_title":"","metadata":{"abstract":"Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal lowas well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and online stage, and how it can be adapted for high accuracy or low latency. 
For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points, BLE beacons as well as several trajectories that include IMU data.\n\nResearch paper: https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf","publication_date":{"day":null,"month":null,"year":2021,"errors":{}},"publication_name":"Presentation given at IPIN 2021, 11th International Conference on Indoor Positioning and Indoor Navigation, Lloret de Mar, Spain"},"translated_abstract":"Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal lowas well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and online stage, and how it can be adapted for high accuracy or low latency. For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points, BLE beacons as well as several trajectories that include IMU data.\n\nResearch paper: https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf","internal_url":"https://www.academia.edu/63182979/Indoor_Positioning_Using_the_OpenHPS_Framework","translated_internal_url":"","created_at":"2021-12-04T03:57:17.588-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[{"id":37186240,"work_id":63182979,"tagging_user_id":13155,"tagged_user_id":61118416,"co_author_invite_id":null,"email":"m***l@gmail.com","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Maxim Van de Wynckel","title":"Indoor Positioning Using the OpenHPS 
Framework"}],"downloadable_attachments":[{"id":75690803,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/75690803/thumbnails/1.jpg","file_name":"IPIN2021.pdf","download_url":"https://www.academia.edu/attachments/75690803/download_file","bulk_download_file_name":"Indoor_Positioning_Using_the_OpenHPS_Fra.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/75690803/IPIN2021-libre.pdf?1638621078=\u0026response-content-disposition=attachment%3B+filename%3DIndoor_Positioning_Using_the_OpenHPS_Fra.pdf\u0026Expires=1744203894\u0026Signature=JG3rUso8ueGbb2uB-MODpnzkT8DIqpyjxTHSMrA0suSc49sXtZ-vRutqWSBuh96wPUCv9ZwFFY8zgymvarnyOMxLACE8sQjxBuU3a5Ly5tGNpa0Z5wUDdwWhVnz3qaaq6cx0aozCcKY2FWXA0nWGtivNwiSX2-eHWjNjlc7HDyt7W--MrIz2V54PDpgEt9i8PSAzDEr8fwATHI2rQFxpNVKKcLYJc~ITm5-B91hsvz-hOtQyWflAimshgKdYBvbMtMfLtx21xwECuDZrPVQ3zo1Zgd2iymU5U85uTWy6KuKRlfxhFsZBIMOn9UF8RodB~d4LxsJ0ayj0RVsOvzL5Nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Indoor_Positioning_Using_the_OpenHPS_Framework","translated_slug":"","page_count":25,"language":"en","content_type":"Work","summary":"Hybrid positioning frameworks use various sensors and algorithms to enhance positioning through different types of fusion. The optimisation of the fusion process requires the testing of different algorithm parameters and optimal lowas well as high-level sensor fusion techniques. The presented OpenHPS open source hybrid positioning system is a modular framework managing individual nodes in a process network, which can be configured to support concrete positioning use cases or to adapt to specific technologies. This modularity allows developers to rapidly develop and optimise their positioning system while still providing them the flexibility to add their own algorithms. In this paper we discuss how a process network developed with OpenHPS can be used to realise a customisable indoor positioning solution with an offline and online stage, and how it can be adapted for high accuracy or low latency. 
For the demonstration and validation of our indoor positioning solution, we further compiled a publicly available dataset containing data from WLAN access points, BLE beacons as well as several trajectories that include IMU data.\n\nResearch paper: https://beatsigner.com/publications/indoor-positioning-using-the-openhps-framework.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":75690803,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/75690803/thumbnails/1.jpg","file_name":"IPIN2021.pdf","download_url":"https://www.academia.edu/attachments/75690803/download_file","bulk_download_file_name":"Indoor_Positioning_Using_the_OpenHPS_Fra.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/75690803/IPIN2021-libre.pdf?1638621078=\u0026response-content-disposition=attachment%3B+filename%3DIndoor_Positioning_Using_the_OpenHPS_Fra.pdf\u0026Expires=1744203894\u0026Signature=JG3rUso8ueGbb2uB-MODpnzkT8DIqpyjxTHSMrA0suSc49sXtZ-vRutqWSBuh96wPUCv9ZwFFY8zgymvarnyOMxLACE8sQjxBuU3a5Ly5tGNpa0Z5wUDdwWhVnz3qaaq6cx0aozCcKY2FWXA0nWGtivNwiSX2-eHWjNjlc7HDyt7W--MrIz2V54PDpgEt9i8PSAzDEr8fwATHI2rQFxpNVKKcLYJc~ITm5-B91hsvz-hOtQyWflAimshgKdYBvbMtMfLtx21xwECuDZrPVQ3zo1Zgd2iymU5U85uTWy6KuKRlfxhFsZBIMOn9UF8RodB~d4LxsJ0ayj0RVsOvzL5Nw__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":9988,"name":"Indoor Positioning","url":"https://www.academia.edu/Documents/in/Indoor_Positioning"},{"id":46957,"name":"Indoor Navigation","url":"https://www.academia.edu/Documents/in/Indoor_Navigation"},{"id":83870,"name":"Ubiquitous Positioning","url":"https://www.academia.edu/Documents/in/Ubiquitous_Positioning"},{"id":86591,"name":"Mobile Positioning","url":"https://www.academia.edu/Documents/in/Mobile_Positioning"},{"id":294922,"name":"Wi Fi Positioning","url":"https://www.academia.edu/Documents/in/Wi_Fi_Positioning"},{"id":368857,"name":"Positioning","url":"https://www.academia.edu/Documents/in/Positioning"},{"id":533704,"name":"Indoor Location","url":"https://www.academia.edu/Documents/in/Indoor_Location"},{"id":3857627,"name":"OpenHPS","url":"https://www.academia.edu/Documents/in/OpenHPS"}],"urls":[{"id":15699851,"url":"https://speakerdeck.com/signer/indoor-positioning-using-the-openhps-framework"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (true) { Aedu.setUpFigureCarousel('profile-work-63182979-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1675035"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1675035/What_is_Wrong_with_Digital_Documents_A_Conceptual_Model_for_Structural_Cross_Media_Content_Composition_and_Reuse"><img alt="Research paper thumbnail of What is Wrong with Digital Documents? 
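The process network underlying such a positioning pipeline can be sketched in a few lines of TypeScript. The following is a generic, simplified illustration of the idea of composable nodes and is not the actual OpenHPS API; the Frame, GraphNode, ProcessingNode and SinkNode types are assumptions made for this example.

// Generic sketch of a push-based process network for positioning.
// A frame carries the sensor data and position estimates that flow
// through the network.
interface Frame {
  rssi?: Map<string, number>;  // e.g. WLAN or BLE signal strengths
  position?: [number, number]; // estimated 2D position
}

abstract class GraphNode {
  private successors: GraphNode[] = [];

  // Wire this node to a successor and return it, so that networks can
  // be built by chaining: source.to(processor).to(sink).
  to(node: GraphNode): GraphNode {
    this.successors.push(node);
    return node;
  }

  protected emit(frame: Frame): void {
    for (const next of this.successors) {
      next.push(frame);
    }
  }

  abstract push(frame: Frame): void;
}

// A processing node transforms frames, e.g. a fingerprinting node that
// maps RSSI values recorded in the online stage to a position by
// comparing them with fingerprints collected in the offline stage.
class ProcessingNode extends GraphNode {
  constructor(private readonly fn: (frame: Frame) => Frame) {
    super();
  }

  push(frame: Frame): void {
    this.emit(this.fn(frame));
  }
}

class SinkNode extends GraphNode {
  push(frame: Frame): void {
    console.log('estimated position', frame.position);
  }
}

// Example wiring: a trivial "fingerprinting" step that places every
// frame at a fixed position, just to show the data flow.
const source = new ProcessingNode((frame) => frame);
source.to(new ProcessingNode((frame) => ({ ...frame, position: [1, 2] }))).to(new SinkNode());
source.push({ rssi: new Map([['ap-1', -42]]) });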
What is Wrong with Digital Documents? A Conceptual Model for Structural Cross-Media Content Composition and Reuse
Presentation given at ER 2010, 29th International Conference on Conceptual Modeling, Vancouver, Canada, 2010

Many of today's digital document formats are strongly based on a digital emulation of printed media. While such a paper simulation might be appropriate for the visualisation of certain digital content, it is generally not the most effective solution for digitally managing and storing information. The oversimplistic modelling of digital documents as monolithic blocks of linear content, with a lack of structural semantics, ignores some of the superior features that digital media offers in comparison to traditional paper documents. For example, existing digital document formats adopt the limitations of paper documents by unnecessarily replicating content via copy and paste operations, instead of digitally embedding and reusing parts of digital documents via structural references. We introduce a conceptual model for structural cross-media content composition and highlight how the proposed solution not only enables the reuse of content via structural relationships, but also supports dynamic and context-dependent document adaptation, structural content annotations as well as the integration of arbitrary non-textual media types. We further discuss solutions for fluid navigation and cross-media content publishing based on the proposed structural cross-media content model.

Research paper: https://beatsigner.com/publications/what-is-wrong-with-digital-documents-a-conceptual-model-for-structural-cross-media-content-composition-and-reuse.pdf
Slides: https://speakerdeck.com/signer/what-is-wrong-with-digital-documents-a-conceptual-model-for-structural-cross-media-content-composition-and-reuse
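The core idea of reusing content via structural references rather than copy and paste can be captured in a short TypeScript sketch. The actual conceptual model in the paper is richer (it is grounded in the RSL hypermedia metamodel); the Resource, DocElement and Repository types below are merely illustrative assumptions.

// Sketch of structural content reuse: documents are compositions of
// references to shared resources instead of copies of their content.
interface Resource {
  id: string;
  content: string;
}

// A structure element either references a shared resource or groups
// other structure elements.
type DocElement =
  | { kind: 'ref'; resourceId: string }
  | { kind: 'group'; children: DocElement[] };

class Repository {
  private resources = new Map<string, Resource>();

  add(resource: Resource): void {
    this.resources.set(resource.id, resource);
  }

  // References are resolved at rendering time, so updating a resource
  // immediately affects every document that structurally reuses it;
  // no replicated copies can drift out of sync.
  render(element: DocElement): string {
    if (element.kind === 'ref') {
      return this.resources.get(element.resourceId)?.content ?? '';
    }
    return element.children.map((child) => this.render(child)).join('\n');
  }
}

// Two documents embedding the same paragraph by reference:
const repo = new Repository();
repo.add({ id: 'p1', content: 'A shared paragraph.' });
const docA: DocElement = { kind: 'group', children: [{ kind: 'ref', resourceId: 'p1' }] };
const docB: DocElement = { kind: 'group', children: [{ kind: 'ref', resourceId: 'p1' }] };
console.log(repo.render(docA), repo.render(docB));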
Architectures</a></div><div class="wp-workCard_item"><span>Presentation given at International Workshop Toward a Design Language for Data Physicalization, Berlin, Germany</span><span>, 2018</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Rec...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user’s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.<br /><br />Research paper: <a href="https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf" rel="nofollow">https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="64a83de08fc0cec877404c90966631ff" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77358930,&quot;asset_id&quot;:65998263,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77358930/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="65998263"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="65998263"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 65998263; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=65998263]").text(description); 
$(".js-view-count[data-work-id=65998263]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 65998263; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='65998263']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "64a83de08fc0cec877404c90966631ff" } } $('.js-work-strip[data-work-id=65998263]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":65998263,"title":"Cross-Media Information Spaces and Architectures","translated_title":"","metadata":{"abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user’s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.\n\nResearch paper: https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf","publication_date":{"day":null,"month":null,"year":2018,"errors":{}},"publication_name":"Presentation given at International Workshop Toward a Design Language for Data Physicalization, Berlin, Germany"},"translated_abstract":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. 
Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user’s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.\n\nResearch paper: https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf","internal_url":"https://www.academia.edu/65998263/Cross_Media_Information_Spaces_and_Architectures","translated_internal_url":"","created_at":"2021-12-26T00:26:39.383-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[],"downloadable_attachments":[{"id":77358930,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77358930/thumbnails/1.jpg","file_name":"DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/77358930/download_file","bulk_download_file_name":"Cross_Media_Information_Spaces_and_Archi.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77358930/DataPhys2018-libre.pdf?1640508949=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Information_Spaces_and_Archi.pdf\u0026Expires=1744203894\u0026Signature=KwKCOmOi34dPXhPOFBDJm6X8fC1S6qfFgE7BQfh0vplESkRxjB45CUQYuYEiTf5aZyLaUP41~XITIDsxIfkb4CXr5VWDCMsPjv4DpRfGN3illctQCzQsXeqlgOxXPQmERnIPQA-xprkM2y6skeHiB8CQfd3w2fK209xEzLOxNBSusNaRldM5G5nlCARmpcJUQ06ohjxyO75GAr-ufDQ7OJzS1d8-3ysYvoJYyqADGskSkrTe0RipadV49b6y36Ie6Hl6NJ56cDeXnhkJmUTAFZrLDFnD3~a6JhUAv8GUjsAubb5LKnJOEVxq3UL5w05GYnjkGyPKF3BD~~elP54h4g__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Cross_Media_Information_Spaces_and_Architectures","translated_slug":"","page_count":7,"language":"en","content_type":"Work","summary":"Advanced data visualisation techniques enable the exploration and analysis of large datasets. Recently, there is the emerging field of data physicalisation, where data is represented in physical space (e.g. via physical models) and can no longer only be explored visually, but also by making use of other senses such as touch. Most existing data physicalisation solutions are static and cannot be dynamically updated based on a user’s interaction. Our goal is to develop a framework for new forms of dynamic data physicalisation in order to support an interactive exploration and analysis of datasets. Based on a study of the design space for dynamic data physicalisation, we are therefore working on a grammar for representing the fundamental physical operations and interactions that can be applied to the underlying data. 
Our envisioned extensible data physicalisation framework will enable the rapid prototyping of dynamic data physicalisations and thereby support researchers who want to experiment with new combinations of physical variables or output devices for dynamic data physicalisation as well as designers and application developers who are interested in the development of innovative dynamic data physicalisation solutions.\n\nResearch paper: https://beatsigner.com/publications/towards-a-framework-for-dynamic-data-physicalisation.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77358930,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77358930/thumbnails/1.jpg","file_name":"DataPhys2018.pdf","download_url":"https://www.academia.edu/attachments/77358930/download_file","bulk_download_file_name":"Cross_Media_Information_Spaces_and_Archi.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77358930/DataPhys2018-libre.pdf?1640508949=\u0026response-content-disposition=attachment%3B+filename%3DCross_Media_Information_Spaces_and_Archi.pdf\u0026Expires=1744203894\u0026Signature=KwKCOmOi34dPXhPOFBDJm6X8fC1S6qfFgE7BQfh0vplESkRxjB45CUQYuYEiTf5aZyLaUP41~XITIDsxIfkb4CXr5VWDCMsPjv4DpRfGN3illctQCzQsXeqlgOxXPQmERnIPQA-xprkM2y6skeHiB8CQfd3w2fK209xEzLOxNBSusNaRldM5G5nlCARmpcJUQ06ohjxyO75GAr-ufDQ7OJzS1d8-3ysYvoJYyqADGskSkrTe0RipadV49b6y36Ie6Hl6NJ56cDeXnhkJmUTAFZrLDFnD3~a6JhUAv8GUjsAubb5LKnJOEVxq3UL5w05GYnjkGyPKF3BD~~elP54h4g__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":459,"name":"Information Science","url":"https://www.academia.edu/Documents/in/Information_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":69100,"name":"Data Science","url":"https://www.academia.edu/Documents/in/Data_Science"},{"id":1993399,"name":"Data Physicalisation","url":"https://www.academia.edu/Documents/in/Data_Physicalisation"}],"urls":[{"id":15698988,"url":"https://speakerdeck.com/signer/towards-a-framework-for-dynamic-data-physicalisation"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-65998263-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="36867389"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/36867389/An_Analysis_of_Cross_Document_Linking_Mechanisms"><img alt="Research paper thumbnail of An Analysis of Cross-Document Linking Mechanisms" class="work-thumbnail" src="https://attachments.academia-assets.com/77363235/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/36867389/An_Analysis_of_Cross_Document_Linking_Mechanisms">An Analysis of 
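The abstract mentions a grammar of fundamental physical operations without fixing its form. As a minimal sketch of the idea, with hypothetical names throughout (none of this is the actual framework's API), a dynamic physicalisation can be modelled as a stream of primitive operations that set physical variables of individual data points:

    import java.util.EnumMap;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: a dynamic data physicalisation as a stream of
    // primitive operations that set physical variables of data points.
    public class PhysicalisationSketch {

        // Physical variables that an output device could actuate.
        enum PhysicalVariable { HEIGHT, COLOUR, VIBRATION }

        // One primitive operation of the hypothetical grammar.
        record Operation(String dataPoint, PhysicalVariable variable, double value) {}

        // A stand-in for an output device that just records the physical state.
        static final Map<String, Map<PhysicalVariable, Double>> state = new HashMap<>();

        static void apply(Operation op) {
            state.computeIfAbsent(op.dataPoint(), k -> new EnumMap<>(PhysicalVariable.class))
                 .put(op.variable(), op.value());
            System.out.printf("%s: %s -> %.2f%n", op.dataPoint(), op.variable(), op.value());
        }

        public static void main(String[] args) {
            // A user interaction (e.g. filtering the dataset) would compile
            // into such a sequence of operations to be sent to the device.
            List<Operation> program = List.of(
                new Operation("brussels", PhysicalVariable.HEIGHT, 0.8),
                new Operation("brussels", PhysicalVariable.COLOUR, 0.2),
                new Operation("ghent", PhysicalVariable.HEIGHT, 0.5));
            program.forEach(PhysicalisationSketch::apply);
        }
    }

Separating the operation stream from the device that interprets it is one way to read the stated goal of experimenting with new combinations of physical variables and output devices: a new device only needs a new interpreter for the same operations.
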
An Analysis of Cross-Document Linking Mechanisms
by Beat Signer and Ahmed Tayeh
Presentation given at JCDL 2018, ACM/IEEE Joint Conference on Digital Libraries, Fort Worth, USA, 2018

Physical and digital documents often do not exist in isolation but are implicitly or explicitly linked. Previous research in Human-Computer Interaction and Personal Information Management has revealed certain user behaviour in associating information across physical and digital documents. Nevertheless, there is a lack of empirical studies on user needs and behaviour when defining these associations. In this paper, we address this gap and provide insights into the strategies that users apply when associating information across physical and digital documents. In addition, our study reveals the limitations of current practices, and we suggest improvements for associating information across documents. Finally, we identify a set of design implications for the development of future cross-document linking solutions.

Research paper: https://beatsigner.com/publications/an-analysis-of-cross-document-linking-mechanisms.pdf

Designing Prosthetic Memory: Audio or Transcript, That is the Question
by Beat Signer and Payam Ebrahimi
Presentation given at AVI 2018, International Working Conference on Advanced Visual Interfaces, Grosseto, Italy, 2018

Audio recordings and the corresponding transcripts are often used as prosthetic memory (PM) after meetings and lectures. While current research mainly develops novel features for prosthetic memory, less is known about how and why audio recordings and transcripts are used. We investigate how users interact with audio and transcripts as prosthetic memory, whether interaction strategies change over time, and analyse potential differences in accuracy and efficiency. In contrast to the subjective user perception, our results show that audio recordings and transcripts are equally efficient, but that transcripts are generally preferred due to their easily accessible contextual information. We further identified that prosthetic memory is not only used as a recall aid but is frequently also consulted to verify information that has been recalled from organic memory (OM). Our findings are summarised in a number of design implications for prosthetic memory solutions.

Research paper: https://beatsigner.com/publications/designing-prosthetic-memory-audio-or-transcript-that-is-the-question.pdf
Slides: https://speakerdeck.com/signer/designing-prosthetic-memory-audio-or-transcript-that-is-the-question

Zone of Proximal Development" class="work-thumbnail" src="https://attachments.academia-assets.com/84750517/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/77388591/Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development">Personalised Learning Environments based on Knowledge Graphs and the Zone of Proximal Development</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/YoshiMalaise">Yoshi Malaise</a></span></div><div class="wp-workCard_item"><span>Presentation at CSEDU 2022, Online</span><span>, 2022</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The learning of new knowledge and skills often requires previous knowledge, which can lead to som...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The learning of new knowledge and skills often requires previous knowledge, which can lead to some frustration if a teacher does not know a learner&#39;s exact knowledge and skills and therefore confronts them with exercises that are too difficult to solve. We present a solution to address this issue when teaching techniques and skills in the domain of table tennis, based on the concrete needs of trainers that we have investigated in a survey. We present a conceptual model for the representation of knowledge graphs as well as the level at which individual players already master parts of this knowledge graph. Our fine-grained model enables the automatic suggestion of optimal exercises in a player&#39;s so-called zone of proximal development, and our domain-specific application allows table tennis trainers to schedule their training sessions and exercises based on this rich information. In an initial evaluation of the resulting solution for personalised learning environments, we received positive and promising feedback from trainers. 
We are currently investigating how our approach and conceptual model can be generalised to some more traditional educational settings and how the personalised learning environment might be further improved based on the expressive concepts of the presented model.<br /><br />Research paper: <a href="https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf" rel="nofollow">https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="140be64eaecc85a792c5b754c60775a6" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:84750517,&quot;asset_id&quot;:77388591,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/84750517/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="77388591"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="77388591"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 77388591; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=77388591]").text(description); $(".js-view-count[data-work-id=77388591]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 77388591; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='77388591']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "140be64eaecc85a792c5b754c60775a6" } } $('.js-work-strip[data-work-id=77388591]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":77388591,"title":"Personalised Learning Environments based on Knowledge Graphs and the Zone of Proximal Development","translated_title":"","metadata":{"abstract":"The 
learning of new knowledge and skills often requires previous knowledge, which can lead to some frustration if a teacher does not know a learner's exact knowledge and skills and therefore confronts them with exercises that are too difficult to solve. We present a solution to address this issue when teaching techniques and skills in the domain of table tennis, based on the concrete needs of trainers that we have investigated in a survey. We present a conceptual model for the representation of knowledge graphs as well as the level at which individual players already master parts of this knowledge graph. Our fine-grained model enables the automatic suggestion of optimal exercises in a player's so-called zone of proximal development, and our domain-specific application allows table tennis trainers to schedule their training sessions and exercises based on this rich information. In an initial evaluation of the resulting solution for personalised learning environments, we received positive and promising feedback from trainers. We are currently investigating how our approach and conceptual model can be generalised to some more traditional educational settings and how the personalised learning environment might be further improved based on the expressive concepts of the presented model.\n\nResearch paper: https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf","ai_title_tag":"Knowledge Graphs for Personalized Learning","publication_date":{"day":null,"month":null,"year":2022,"errors":{}},"publication_name":"Presentation at CSEDU 2022, Online"},"translated_abstract":"The learning of new knowledge and skills often requires previous knowledge, which can lead to some frustration if a teacher does not know a learner's exact knowledge and skills and therefore confronts them with exercises that are too difficult to solve. We present a solution to address this issue when teaching techniques and skills in the domain of table tennis, based on the concrete needs of trainers that we have investigated in a survey. We present a conceptual model for the representation of knowledge graphs as well as the level at which individual players already master parts of this knowledge graph. Our fine-grained model enables the automatic suggestion of optimal exercises in a player's so-called zone of proximal development, and our domain-specific application allows table tennis trainers to schedule their training sessions and exercises based on this rich information. In an initial evaluation of the resulting solution for personalised learning environments, we received positive and promising feedback from trainers. 
We are currently investigating how our approach and conceptual model can be generalised to some more traditional educational settings and how the personalised learning environment might be further improved based on the expressive concepts of the presented model.\n\nResearch paper: https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf","internal_url":"https://www.academia.edu/77388591/Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development","translated_internal_url":"","created_at":"2022-04-23T13:03:59.405-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[{"id":38101211,"work_id":77388591,"tagging_user_id":13155,"tagged_user_id":230910613,"co_author_invite_id":7460838,"email":"y***e@vub.be","affiliation":"Vrije Universiteit Brussel","display_order":1,"name":"Yoshi Malaise","title":"Personalised Learning Environments based on Knowledge Graphs and the Zone of Proximal Development"}],"downloadable_attachments":[{"id":84750517,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/84750517/thumbnails/1.jpg","file_name":"Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development.pdf","download_url":"https://www.academia.edu/attachments/84750517/download_file","bulk_download_file_name":"Personalised_Learning_Environments_based.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/84750517/Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development-libre.pdf?1650744927=\u0026response-content-disposition=attachment%3B+filename%3DPersonalised_Learning_Environments_based.pdf\u0026Expires=1744203894\u0026Signature=g4QVyyKQYgCBQ9hiutaVKSa0dYw~hh9AkVRcV8-AoD4pDq5~fnzRYSrzcf2c~0tVvxVZ-JFf7NMAlwlRCUC1FNFn-JUX~SOeeoeLqk12MvhyGpjMLyCd2gQx0meR92jSqf90nTbJkXbpoZj3P7AylYmU3HZmWcJuu7jj9I-48JFHudq4eihNckpMBM7X04tC~OzjwJ8kIqE7uOjFCEhAzxbtXl84XzJAqn~OpUrPNL~ILZ5bIJwc8WNb5-zjQZeK2YIBGOIRdzr83cDI0bzHhpOLXgxJO6xU-eb4smhDBQGaTt7K3uDkHhuemCMKP8BFV4o3v~jykPCJwrxmRd0U-w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development","translated_slug":"","page_count":33,"language":"en","content_type":"Work","summary":"The learning of new knowledge and skills often requires previous knowledge, which can lead to some frustration if a teacher does not know a learner's exact knowledge and skills and therefore confronts them with exercises that are too difficult to solve. We present a solution to address this issue when teaching techniques and skills in the domain of table tennis, based on the concrete needs of trainers that we have investigated in a survey. We present a conceptual model for the representation of knowledge graphs as well as the level at which individual players already master parts of this knowledge graph. Our fine-grained model enables the automatic suggestion of optimal exercises in a player's so-called zone of proximal development, and our domain-specific application allows table tennis trainers to schedule their training sessions and exercises based on this rich information. In an initial evaluation of the resulting solution for personalised learning environments, we received positive and promising feedback from trainers. 
We are currently investigating how our approach and conceptual model can be generalised to some more traditional educational settings and how the personalised learning environment might be further improved based on the expressive concepts of the presented model.\n\nResearch paper: https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":84750517,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/84750517/thumbnails/1.jpg","file_name":"Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development.pdf","download_url":"https://www.academia.edu/attachments/84750517/download_file","bulk_download_file_name":"Personalised_Learning_Environments_based.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/84750517/Personalised_Learning_Environments_based_on_Knowledge_Graphs_and_the_Zone_of_Proximal_Development-libre.pdf?1650744927=\u0026response-content-disposition=attachment%3B+filename%3DPersonalised_Learning_Environments_based.pdf\u0026Expires=1744203894\u0026Signature=g4QVyyKQYgCBQ9hiutaVKSa0dYw~hh9AkVRcV8-AoD4pDq5~fnzRYSrzcf2c~0tVvxVZ-JFf7NMAlwlRCUC1FNFn-JUX~SOeeoeLqk12MvhyGpjMLyCd2gQx0meR92jSqf90nTbJkXbpoZj3P7AylYmU3HZmWcJuu7jj9I-48JFHudq4eihNckpMBM7X04tC~OzjwJ8kIqE7uOjFCEhAzxbtXl84XzJAqn~OpUrPNL~ILZ5bIJwc8WNb5-zjQZeK2YIBGOIRdzr83cDI0bzHhpOLXgxJO6xU-eb4smhDBQGaTt7K3uDkHhuemCMKP8BFV4o3v~jykPCJwrxmRd0U-w__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":887,"name":"Teaching and Learning","url":"https://www.academia.edu/Documents/in/Teaching_and_Learning"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":1003,"name":"Educational Technology","url":"https://www.academia.edu/Documents/in/Educational_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1609,"name":"E-learning","url":"https://www.academia.edu/Documents/in/E-learning"},{"id":3095,"name":"Computer-Based Learning","url":"https://www.academia.edu/Documents/in/Computer-Based_Learning"},{"id":3457,"name":"Learning and Teaching","url":"https://www.academia.edu/Documents/in/Learning_and_Teaching"},{"id":8673,"name":"Digital Media \u0026 Learning","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Learning"},{"id":17758,"name":"Technology Enhanced Learning","url":"https://www.academia.edu/Documents/in/Technology_Enhanced_Learning"},{"id":228182,"name":"Learning Strategies","url":"https://www.academia.edu/Documents/in/Learning_Strategies-1"}],"urls":[{"id":19827378,"url":"https://beatsigner.com/publications/personalised-learning-environments-based-on-knowledge-graphs-and-the-zone-of-proximal-development.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-77388591-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1658471"><div class="profile--work_thumbnail 
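One simple reading of the suggestion mechanism described in the abstract is that an exercise is optimal when it trains a skill the player has not yet mastered but whose prerequisite skills are all mastered. A minimal sketch under that assumption (hypothetical names and data; the paper's model is considerably more fine-grained) treats the knowledge graph as a prerequisite relation and filters it against a player's mastered skills:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeSet;

    // Hypothetical sketch: a skill lies in a player's zone of proximal
    // development (ZPD) if it is not yet mastered while all of its
    // prerequisite skills are. Names and data are illustrative only.
    public class ZpdSketch {

        // skill -> prerequisite skills
        static final Map<String, List<String>> prerequisites = Map.of(
            "forehand-drive", List.of(),
            "backhand-drive", List.of(),
            "forehand-topspin", List.of("forehand-drive"),
            "topspin-rally", List.of("forehand-topspin", "backhand-drive"));

        static Set<String> zoneOfProximalDevelopment(Set<String> mastered) {
            Set<String> zpd = new TreeSet<>();
            for (var entry : prerequisites.entrySet()) {
                if (!mastered.contains(entry.getKey())
                        && mastered.containsAll(entry.getValue())) {
                    zpd.add(entry.getKey());
                }
            }
            return zpd;
        }

        public static void main(String[] args) {
            // A player who masters both basic drives is ready for forehand
            // topspin, but not yet for topspin rallies.
            System.out.println(zoneOfProximalDevelopment(
                Set.of("forehand-drive", "backhand-drive")));  // [forehand-topspin]
        }
    }

A trainer-facing application could then rank the exercises attached to each skill in this set when scheduling a training session.
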
hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1658471/iGesture_A_General_Gesture_Recognition_Framework"><img alt="Research paper thumbnail of iGesture: A General Gesture Recognition Framework" class="work-thumbnail" src="https://attachments.academia-assets.com/77393217/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1658471/iGesture_A_General_Gesture_Recognition_Framework">iGesture: A General Gesture Recognition Framework</a></div><div class="wp-workCard_item"><span>Presentation given at ICDAR 2007, 9th International Conference on Document Analysis and Recognition, Curitiba, Brazil</span><span>, 2007</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">With the emergence of digital pen and paper interfaces, there is a need for gesture recognition t...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">With the emergence of digital pen and paper interfaces, there is a need for gesture recognition tools for digital pen input. While there exists a variety of gesture recognition frameworks, none of them addresses the issues of supporting application developers as well as the designers of new recognition algorithms and, at the same time, can be integrated with new forms of input devices such as digital pens. We introduce iGesture, a Java-based gesture recognition framework focusing on extensibility and cross-application reusability by providing an integrated solution that includes tools for gesture recognition as well as the creation and management of gesture sets for the evaluation and optimisation of new or existing gesture recognition algorithms. 
In addition to traditional screen-based interaction, iGesture provides a digital pen and paper interface.<br /><br />Research paper: <a href="https://beatsigner.com/publications/igesture-a-general-gesture-recognition-framework.pdf" rel="nofollow">https://beatsigner.com/publications/igesture-a-general-gesture-recognition-framework.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b4cd63de86d26c8da1ec0be94022d4b9" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77393217,&quot;asset_id&quot;:1658471,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77393217/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1658471"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1658471"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1658471; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1658471]").text(description); $(".js-view-count[data-work-id=1658471]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1658471; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1658471']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b4cd63de86d26c8da1ec0be94022d4b9" } } $('.js-work-strip[data-work-id=1658471]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1658471,"title":"iGesture: A General Gesture Recognition Framework","translated_title":"","metadata":{"abstract":"With the emergence of digital pen and paper interfaces, there is a need for gesture recognition tools for digital pen input. 

The Context Modelling Toolkit: A Unified Multi-layered Context Modelling Approach
class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/sandratrullemans">Sandra Trullemans</a></span></div><div class="wp-workCard_item"><span>Presentation given at EICS 2017, 9th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Lisbon, Portugal</span><span>, 2017</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Context awareness plays an important role in recent smart environments and embedded interactions....</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Context awareness plays an important role in recent smart environments and embedded interactions. In order to increase user satisfaction and acceptance, these context-aware solutions should be controllable by end users. Over the last few years we have therefore seen an emerging trend towards visual programming tools for context-aware applications based on simple &quot;IF this THEN that&quot; rules. However, existing solutions often do not support the simple reuse of the &quot;this&quot; part in order to define more sophisticated rules. Given that the desired level of control varies among individuals, we propose a unified multi-layered context modelling approach distinguishing between end users, expert users and programmers. Our Context Modelling Toolkit (CMT) consists of the necessary context modelling concepts and offers a rule-based context processing engine. We further illustrate how end users and expert users might interact with the CMT framework. 
Finally, we highlight some advantages of our Context Modelling Toolkit by discussing a number of use cases.<br /><br />Research paper: <a href="https://beatsigner.com/publications/the-context-modelling-toolkit-a-unified-multi-layered-context-modelling-approach.pdf" rel="nofollow">https://beatsigner.com/publications/the-context-modelling-toolkit-a-unified-multi-layered-context-modelling-approach.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="fc77737798b7c84d9f30830b15c26d95" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77393371,&quot;asset_id&quot;:33706219,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77393371/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="33706219"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="33706219"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 33706219; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=33706219]").text(description); $(".js-view-count[data-work-id=33706219]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 33706219; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='33706219']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "fc77737798b7c84d9f30830b15c26d95" } } $('.js-work-strip[data-work-id=33706219]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":33706219,"title":"The Context Modelling Toolkit: A Unified Multi-layered Context Modelling Approach","translated_title":"","metadata":{"abstract":"Context awareness plays an important role in recent smart environments and embedded interactions. 
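The reuse of the "this" part that the abstract calls for can be made concrete with named, composable conditions. The Java sketch below illustrates the general idea of such a rule-based context engine under that assumption; the names (Situation, Rule, evaluate) are hypothetical and not the actual CMT API.

    import java.util.Map;
    import java.util.function.Predicate;

    // Hypothetical sketch of reusable "IF this THEN that" rules; the names
    // below are illustrative and not taken from the CMT framework itself.

    // A named, reusable "this" part: a condition over the current context.
    record Situation(String name, Predicate<Map<String, Object>> condition) {
        // Composing named situations yields the more sophisticated rules
        // that, according to the abstract, existing tools make hard to build.
        Situation and(Situation other) {
            return new Situation(name + " AND " + other.name(),
                    condition.and(other.condition()));
        }
    }

    // A simple "IF this THEN that" rule.
    record Rule(Situation when, Runnable then) {
        void evaluate(Map<String, Object> context) {
            if (when.condition().test(context)) then.run();
        }
    }

    class ContextDemo {
        public static void main(String[] args) {
            Situation atHome = new Situation("at-home",
                    ctx -> "home".equals(ctx.get("location")));
            Situation evening = new Situation("evening",
                    ctx -> (int) ctx.get("hour") >= 18);

            // Reuse the two "this" parts in a composed rule.
            Rule lightsOn = new Rule(atHome.and(evening),
                    () -> System.out.println("Turning the lights on"));

            lightsOn.evaluate(Map.of("location", "home", "hour", 20));
        }
    }

An end user might only toggle ready-made rules, while an expert user composes new situations from existing ones, which matches the multi-layered split between end users, expert users and programmers that the abstract proposes.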
Intelligence","url":"https://www.academia.edu/Documents/in/Ambient_Intelligence"},{"id":54140,"name":"End User Development","url":"https://www.academia.edu/Documents/in/End_User_Development"},{"id":74128,"name":"Rule-Based Systems","url":"https://www.academia.edu/Documents/in/Rule-Based_Systems"},{"id":90202,"name":"Rule based systems","url":"https://www.academia.edu/Documents/in/Rule_based_systems"},{"id":114991,"name":"End-User Software Engineering","url":"https://www.academia.edu/Documents/in/End-User_Software_Engineering"},{"id":119731,"name":"Visual Programming","url":"https://www.academia.edu/Documents/in/Visual_Programming"},{"id":142326,"name":"Web of Things","url":"https://www.academia.edu/Documents/in/Web_of_Things"},{"id":144182,"name":"End Users","url":"https://www.academia.edu/Documents/in/End_Users"},{"id":373609,"name":"Template","url":"https://www.academia.edu/Documents/in/Template"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"},{"id":785596,"name":"Templates","url":"https://www.academia.edu/Documents/in/Templates"}],"urls":[{"id":14771399,"url":"https://beatsigner.com/publications/the-context-modelling-toolkit-a-unified-multi-layered-context-modelling-approach.pdf"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-33706219-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="1672774"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/1672774/Interactive_Paper_Past_Present_and_Future"><img alt="Research paper thumbnail of Interactive Paper: Past, Present and Future" class="work-thumbnail" src="https://attachments.academia-assets.com/77393580/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1672774/Interactive_Paper_Past_Present_and_Future">Interactive Paper: Past, Present and Future</a></div><div class="wp-workCard_item"><span>Presentation given at PaperComp 2010, 1st International Workshop on Paper Computing, Copenhagen Denmark</span><span>, 2010</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Over the last few years, there has been a significant increase in the number of researchers deali...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Over the last few years, there has been a significant increase in the number of researchers dealing with the integration of paper and digital information or services. While recent technological developments enable new forms of paper-digital integration and interaction, some of the original research on interactive paper dates back almost twenty years. We give a brief overview of the most relevant past and current interactive paper developments. 
Then, based on our experience in developing a wide variety of interactive paper solutions over the last decade, as well as the results of other research groups, we outline future directions and challenges for the realisation of innovative interactive paper solutions. Further, we propose the definition of common data formats and interactive paper design patterns to ensure future cross-application and framework interoperability.<br /><br />Research paper: <a href="https://beatsigner.com/publications/interactive-paper-past-present-and-future.pdf" rel="nofollow">https://beatsigner.com/publications/interactive-paper-past-present-and-future.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="2425acfda4dcb60cb753abe89480a8e9" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77393580,&quot;asset_id&quot;:1672774,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77393580/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1672774"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1672774"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1672774; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1672774]").text(description); $(".js-view-count[data-work-id=1672774]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1672774; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1672774']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "2425acfda4dcb60cb753abe89480a8e9" } } $('.js-work-strip[data-work-id=1672774]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1672774,"title":"Interactive Paper: Past, Present and Future","translated_title":"","metadata":{"abstract":"Over the last 

DocTr: A Unifying Framework for Tracking Physical Documents and Organisational Structures
Presentation given at EICS 2016, 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Brussels, Belgium, 2016

Despite major advancements in digital document management, paper documents still play an important role in our daily work and are often used in combination with digital documents and services. Over the last two decades, we have seen a number of augmented reality solutions that help users manage their paper documents in office settings. However, since data is mainly managed at the application layer, the use of multiple document tracking setups results in fragmented and inconsistent tracking data. Furthermore, existing tracking solutions often focus on tracking paper documents within organisational structures such as folders or filing cabinets, without taking into account the flow of documents across these structures. We present the Document Tracking (DocTr) framework for unifying existing document tracking setups and managing document metadata across organisational structures. The DocTr framework has been implemented based on a user-centric requirements analysis and simplifies the development of interactive computing systems for personal cross-media information management.

Research paper: https://beatsigner.com/publications/doctr-a-unifying-framework-for-tracking-physical-documents-and-organisational-structures.pdf
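The unification the abstract describes has multiple tracking setups reporting into a single store that also records the flow of a document across organisational structures. The Java sketch below illustrates one way such a store could look; the names (Observation, DocumentTracker, report, flow) are hypothetical and not the actual DocTr API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of a unified document tracking store; the names
    // below are illustrative and not taken from the DocTr framework itself.

    // One tracking event: a physical document observed in an organisational
    // structure (e.g. a folder or filing cabinet) at a point in time.
    record Observation(String documentId, String structure, long timestamp) {}

    // Unified store that different tracking setups (RFID readers, camera-based
    // trackers, ...) all report into, so that tracking data is no longer
    // fragmented per application.
    class DocumentTracker {
        private final Map<String, List<Observation>> history = new HashMap<>();

        void report(Observation o) {
            history.computeIfAbsent(o.documentId(), id -> new ArrayList<>()).add(o);
        }

        // Current location: the structure of the most recent observation.
        String currentLocation(String documentId) {
            List<Observation> trail = history.getOrDefault(documentId, List.of());
            return trail.isEmpty() ? null : trail.get(trail.size() - 1).structure();
        }

        // The flow of a document across organisational structures over time.
        List<String> flow(String documentId) {
            return history.getOrDefault(documentId, List.of())
                    .stream().map(Observation::structure).toList();
        }
    }

    class DocTrDemo {
        public static void main(String[] args) {
            DocumentTracker tracker = new DocumentTracker();
            // Two different setups report observations for the same document.
            tracker.report(new Observation("invoice-42", "desk tray", 1));     // vision-based
            tracker.report(new Observation("invoice-42", "folder: taxes", 2)); // RFID-based
            System.out.println(tracker.currentLocation("invoice-42")); // folder: taxes
            System.out.println(tracker.flow("invoice-42"));            // [desk tray, folder: taxes]
        }
    }

Keeping the full observation trail, rather than only the latest location per setup, is what makes the flow of a document across structures queryable, which is the gap the abstract identifies in existing tracking solutions.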

PimVis: Exploring and Re-finding Documents in Cross-Media Information Spaces
Presentation given at AVI 2016, International Working Conference on Advanced Visual Interfaces, Bari, Italy, 2016

Over the last decade, we have witnessed the emergence of Personal Information Management (PIM) solutions. Although paper documents still form a significant part of our daily working activities, existing PIM systems usually support the organisation and re-finding of digital documents only. While physical document tracking solutions such as RFID- or computer vision-based systems have recently been gaining attention, they usually focus on tracking paper documents and offer limited support for re-finding activities. We present PimVis, a solution for exploring and re-finding digital and paper documents in so-called cross-media information spaces. The PimVis user interface enables a unified organisation of digital and paper documents through the creation of bidirectional links between the digital and physical information space. The presented personal cross-media information management solution further supports extension with alternative document tracking techniques as well as augmented reality solutions. A formative PimVis evaluation revealed the high potential of fully integrated cross-media PIM solutions.

Research paper: https://beatsigner.com/publications/pimvis-exploring-and-refinding-documents-in-cross-media-information-spaces.pdf
(UX)","url":"https://www.academia.edu/Documents/in/User_Experience_UX_"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10907,"name":"Context-Aware Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":20481,"name":"Information Visualisation","url":"https://www.academia.edu/Documents/in/Information_Visualisation"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":47980,"name":"Data Visualization","url":"https://www.academia.edu/Documents/in/Data_Visualization"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":402880,"name":"Personal Information Management - PIM","url":"https://www.academia.edu/Documents/in/Personal_Information_Management_-_PIM"}],"urls":[{"id":15942256,"url":"https://speakerdeck.com/signer/pimvis-exploring-and-re-finding-documents-in-cross-media-information-spaces"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-26244055-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="26235318"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/26235318/A_Multi_layered_Context_Modelling_Approach_for_End_Users_Expert_Users_and_Programmers"><img alt="Research paper thumbnail of A Multi-layered Context Modelling Approach for End Users, Expert Users and Programmers" class="work-thumbnail" src="https://attachments.academia-assets.com/77808070/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/26235318/A_Multi_layered_Context_Modelling_Approach_for_End_Users_Expert_Users_and_Programmers">A Multi-layered Context Modelling Approach for End Users, Expert Users and Programmers</a></div><div class="wp-workCard_item wp-workCard--coauthors"><span>by </span><span><a class="" data-click-track="profile-work-strip-authors" 
href="https://vub.academia.edu/sandratrullemans">Sandra Trullemans</a> and <a class="" data-click-track="profile-work-strip-authors" href="https://vub.academia.edu/BeatSigner">Beat Signer</a></span></div><div class="wp-workCard_item"><span>Presentation given at SERVE 2016, International Workshop on Smart Ecosystems cReation by Visual dEsign, Bari, Italy</span><span>, 2016</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Context awareness plays an important role in smart environments and embedded interactions. In ord...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Context awareness plays an important role in smart environments and embedded interactions. In order to increase user satisfaction and acceptance, context-aware solutions should be controllable by end users. Over the last few years we have therefore witnessed an emerging trend of visual programming tools for context-aware applications based on simple &quot;if this then that&quot; rules. Unfortunately, existing solutions do not support the easy reuse of the &quot;this&quot; part in other rules. Further, the desired level of control varies among individuals. In order to let users choose the right level of automation and control, we propose a multi-layered context modelling approach distinguishing between end users, expert users and programmers. We report on our ongoing development of the Context Modelling Toolkit (CMT) consisting of the necessary context modelling concepts as well as a rule-based context processing engine. 
We further discuss an initial design of the graphical user interface for the presented multi-layered context modelling approach.<br /><br />Research paper: <a href="https://beatsigner.com/publications/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers.pdf" rel="nofollow">https://beatsigner.com/publications/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="d6378815c8a851e356dd87f4c4b05ff8" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77808070,&quot;asset_id&quot;:26235318,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77808070/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="26235318"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="26235318"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 26235318; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=26235318]").text(description); $(".js-view-count[data-work-id=26235318]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 26235318; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='26235318']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "d6378815c8a851e356dd87f4c4b05ff8" } } $('.js-work-strip[data-work-id=26235318]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":26235318,"title":"A Multi-layered Context Modelling Approach for End Users, Expert Users and Programmers","translated_title":"","metadata":{"abstract":"Context awareness plays an important role in smart environments and embedded interactions. 
In order to increase user satisfaction and acceptance, context-aware solutions should be controllable by end users. Over the last few years we have therefore witnessed an emerging trend of visual programming tools for context-aware applications based on simple \"if this then that\" rules. Unfortunately, existing solutions do not support the easy reuse of the \"this\" part in other rules. Further, the desired level of control varies among individuals. In order to let users choose the right level of automation and control, we propose a multi-layered context modelling approach distinguishing between end users, expert users and programmers. We report on our ongoing development of the Context Modelling Toolkit (CMT) consisting of the necessary context modelling concepts as well as a rule-based context processing engine. We further discuss an initial design of the graphical user interface for the presented multi-layered context modelling approach.\n\nResearch paper: https://beatsigner.com/publications/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers.pdf","location":"SERVE 2016, International Workshop on Smart Ecosystems cReation by Visual dEsign, Bari, Italy, June 2016","event_date":{"day":7,"month":6,"year":2016,"errors":{}},"publication_date":{"day":null,"month":null,"year":2016,"errors":{}},"publication_name":"Presentation given at SERVE 2016, International Workshop on Smart Ecosystems cReation by Visual dEsign, Bari, Italy"},"translated_abstract":"Context awareness plays an important role in smart environments and embedded interactions. In order to increase user satisfaction and acceptance, context-aware solutions should be controllable by end users. Over the last few years we have therefore witnessed an emerging trend of visual programming tools for context-aware applications based on simple \"if this then that\" rules. Unfortunately, existing solutions do not support the easy reuse of the \"this\" part in other rules. Further, the desired level of control varies among individuals. In order to let users choose the right level of automation and control, we propose a multi-layered context modelling approach distinguishing between end users, expert users and programmers. We report on our ongoing development of the Context Modelling Toolkit (CMT) consisting of the necessary context modelling concepts as well as a rule-based context processing engine. 
We further discuss an initial design of the graphical user interface for the presented multi-layered context modelling approach.\n\nResearch paper: https://beatsigner.com/publications/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers.pdf","internal_url":"https://www.academia.edu/26235318/A_Multi_layered_Context_Modelling_Approach_for_End_Users_Expert_Users_and_Programmers","translated_internal_url":"","created_at":"2016-06-16T14:42:06.818-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[{"id":21402962,"work_id":26235318,"tagging_user_id":13155,"tagged_user_id":1299708,"co_author_invite_id":null,"email":"s***m@vub.ac.be","affiliation":"Vrije Universiteit Brussel","display_order":-1,"name":"Sandra Trullemans","title":"A Multi-layered Context Modelling Approach for End Users, Expert Users and Programmers"}],"downloadable_attachments":[{"id":77808070,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77808070/thumbnails/1.jpg","file_name":"SERVE2016.pdf","download_url":"https://www.academia.edu/attachments/77808070/download_file","bulk_download_file_name":"A_Multi_layered_Context_Modelling_Approa.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77808070/SERVE2016-libre.pdf?1640974201=\u0026response-content-disposition=attachment%3B+filename%3DA_Multi_layered_Context_Modelling_Approa.pdf\u0026Expires=1744203895\u0026Signature=gdtSKYMpPlq7e~-66XxCxKbdCFnTHQjX4In~qt3byGqPyNP-hOniCuLAo8sgdY7dG162C~94m4UQCCnhiLwnhM0fk5Xy8~7QT7oKT0lLOAfKfrU38mk~3CgN0fIKpehm5jKG3MqPBwuLgU3VtqG7kozTsXC5kTPUUAOAEx8VM6SsbyvZSJMXuEjjnSeyYow0U2TA2egnf2-M7nUFxu1gB1XjdxJ3G-krS8By78zVwrNegoHIf-4NBWdqRy3Or4OSKdeThTaPsFjMPHj8Us8GcRETG7LqESAMDlAnkHBhycMTRf7PUbLhyijBz7shWX3iVxIUQfteCkAv6G7ZROX7SQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"A_Multi_layered_Context_Modelling_Approach_for_End_Users_Expert_Users_and_Programmers","translated_slug":"","page_count":27,"language":"en","content_type":"Work","summary":"Context awareness plays an important role in smart environments and embedded interactions. In order to increase user satisfaction and acceptance, context-aware solutions should be controllable by end users. Over the last few years we have therefore witnessed an emerging trend of visual programming tools for context-aware applications based on simple \"if this then that\" rules. Unfortunately, existing solutions do not support the easy reuse of the \"this\" part in other rules. Further, the desired level of control varies among individuals. In order to let users choose the right level of automation and control, we propose a multi-layered context modelling approach distinguishing between end users, expert users and programmers. We report on our ongoing development of the Context Modelling Toolkit (CMT) consisting of the necessary context modelling concepts as well as a rule-based context processing engine. 
We further discuss an initial design of the graphical user interface for the presented multi-layered context modelling approach.\n\nResearch paper: https://beatsigner.com/publications/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77808070,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77808070/thumbnails/1.jpg","file_name":"SERVE2016.pdf","download_url":"https://www.academia.edu/attachments/77808070/download_file","bulk_download_file_name":"A_Multi_layered_Context_Modelling_Approa.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77808070/SERVE2016-libre.pdf?1640974201=\u0026response-content-disposition=attachment%3B+filename%3DA_Multi_layered_Context_Modelling_Approa.pdf\u0026Expires=1744203895\u0026Signature=gdtSKYMpPlq7e~-66XxCxKbdCFnTHQjX4In~qt3byGqPyNP-hOniCuLAo8sgdY7dG162C~94m4UQCCnhiLwnhM0fk5Xy8~7QT7oKT0lLOAfKfrU38mk~3CgN0fIKpehm5jKG3MqPBwuLgU3VtqG7kozTsXC5kTPUUAOAEx8VM6SsbyvZSJMXuEjjnSeyYow0U2TA2egnf2-M7nUFxu1gB1XjdxJ3G-krS8By78zVwrNegoHIf-4NBWdqRy3Or4OSKdeThTaPsFjMPHj8Us8GcRETG7LqESAMDlAnkHBhycMTRf7PUbLhyijBz7shWX3iVxIUQfteCkAv6G7ZROX7SQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":428,"name":"Algorithms","url":"https://www.academia.edu/Documents/in/Algorithms"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1241,"name":"Knowledge Management","url":"https://www.academia.edu/Documents/in/Knowledge_Management"},{"id":1380,"name":"Computer Engineering","url":"https://www.academia.edu/Documents/in/Computer_Engineering"},{"id":1440,"name":"Visualization","url":"https://www.academia.edu/Documents/in/Visualization"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3424,"name":"Information Visualization","url":"https://www.academia.edu/Documents/in/Information_Visualization"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":9134,"name":"Pervasive Computing","url":"https://www.academia.edu/Documents/in/Pervasive_Computing"},{"id":9135,"name":"The Internet of Things","url":"https://www.academia.edu/Documents/in/The_Internet_of_Things"},{"id":10044,"name":"Context","url":"https://www.academia.edu/Documents/in/Context"},{"id":10907,"name":"Context-Aware 
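The reuse problem mentioned in the abstract can be made concrete with a small sketch. In the following Java snippet, which uses hypothetical names and is not the actual CMT engine, the "this" part of a rule is a named situation that is defined once and can then be referenced by any number of "if this then that" rules.

```java
import java.util.*;
import java.util.function.Predicate;

// Illustrative sketch of "if this then that" rules with a reusable "this"
// part; hypothetical names, not the actual CMT rule engine.
final class RuleEngineSketch {

    private record Rule(String situationName, Runnable action) {}

    private final Map<String, Predicate<Map<String, Object>>> situations = new HashMap<>();
    private final List<Rule> rules = new ArrayList<>();

    // A situation is defined once, e.g. by an expert user ...
    void defineSituation(String name, Predicate<Map<String, Object>> condition) {
        situations.put(name, condition);
    }

    // ... and reused as the "this" part of arbitrarily many rules.
    void addRule(String situationName, Runnable action) {
        rules.add(new Rule(situationName, action));
    }

    // Evaluate all rules against the latest context information.
    void onContextUpdate(Map<String, Object> context) {
        for (Rule rule : rules) {
            Predicate<Map<String, Object>> condition = situations.get(rule.situationName());
            if (condition != null && condition.test(context)) {
                rule.action().run();
            }
        }
    }

    public static void main(String[] args) {
        RuleEngineSketch engine = new RuleEngineSketch();
        engine.defineSituation("atDesk", ctx -> "desk".equals(ctx.get("location")));
        engine.addRule("atDesk", () -> System.out.println("show document overview"));
        engine.addRule("atDesk", () -> System.out.println("mute notifications"));
        engine.onContextUpdate(Map.<String, Object>of("location", "desk"));
    }
}
```

Separating situation definitions from rules also maps naturally onto the multi-layered approach: programmers and expert users can contribute situations, while end users merely combine them into rules.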
Computing","url":"https://www.academia.edu/Documents/in/Context-Aware_Computing"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11119,"name":"User Interface","url":"https://www.academia.edu/Documents/in/User_Interface"},{"id":11123,"name":"Personal Information Management","url":"https://www.academia.edu/Documents/in/Personal_Information_Management"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":26825,"name":"Mobile Computing","url":"https://www.academia.edu/Documents/in/Mobile_Computing"},{"id":54140,"name":"End User Development","url":"https://www.academia.edu/Documents/in/End_User_Development"},{"id":85420,"name":"Crossmedia","url":"https://www.academia.edu/Documents/in/Crossmedia"},{"id":144182,"name":"End Users","url":"https://www.academia.edu/Documents/in/End_Users"},{"id":458849,"name":"Pervasive and Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Pervasive_and_Ubiquitous_Computing"}],"urls":[{"id":15942417,"url":"https://speakerdeck.com/signer/a-multi-layered-context-modelling-approach-for-end-users-expert-users-and-programmers"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-26235318-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="9159346"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/9159346/Towards_a_Conceptual_Framework_and_Metamodel_for_Context_Aware_Personal_Cross_Media_Information_Management_Systems"><img alt="Research paper thumbnail of Towards a Conceptual Framework and Metamodel for Context-Aware Personal Cross-Media Information Management Systems" class="work-thumbnail" src="https://attachments.academia-assets.com/77808371/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/9159346/Towards_a_Conceptual_Framework_and_Metamodel_for_Context_Aware_Personal_Cross_Media_Information_Management_Systems">Towards a Conceptual Framework and Metamodel for Context-Aware Personal Cross-Media Information Management Systems</a></div><div class="wp-workCard_item"><span>Presentation given at ER 2014, 33rd International Conference on Conceptual Modelling, Atlanta, USA</span><span>, 2014</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Information fragmentation is a well-known issue in personal information management (PIM). In orde...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Information fragmentation is a well-known issue in personal information management (PIM). In order to overcome this problem, various PIM solutions have focussed on linking documents via semantic relationships. More recently, task-centered information management (TIM) has been introduced as an alternative PIM paradigm. 
While these two paradigms have their strengths and weaknesses, we aim for a new PIM system design approach to achieve better synergies with human memory. We further envision a cross-media solution where physical information is integrated with a user&#39;s digital personal information space. We present the Object-Concept-Context (OC2) conceptual framework for context-aware personal cross-media information management combining the best of the two existing PIM paradigms and integrating the most relevant features of the human memory. Further, we outline how the OC2 framework has been implemented based on a domain-specific application of the Resource-Selector-Link (RSL) hypermedia metamodel.<br /><br />Research paper: <a href="https://beatsigner.com/publications/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems.pdf" rel="nofollow">https://beatsigner.com/publications/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="71fd7c9d19a40ba774d394399742fa56" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77808371,&quot;asset_id&quot;:9159346,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77808371/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="9159346"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="9159346"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 9159346; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=9159346]").text(description); $(".js-view-count[data-work-id=9159346]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 9159346; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='9159346']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || 
_.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "71fd7c9d19a40ba774d394399742fa56" } } $('.js-work-strip[data-work-id=9159346]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":9159346,"title":"Towards a Conceptual Framework and Metamodel for Context-Aware Personal Cross-Media Information Management Systems","translated_title":"","metadata":{"abstract":"Information fragmentation is a well-known issue in personal information management (PIM). In order to overcome this problem, various PIM solutions have focussed on linking documents via semantic relationships. More recently, task-centered information management (TIM) has been introduced as an alternative PIM paradigm. While these two paradigms have their strengths and weaknesses, we aim for a new PIM system design approach to achieve better synergies with human memory. We further envision a cross-media solution where physical information is integrated with a user's digital personal information space. We present the Object-Concept-Context (OC2) conceptual framework for context-aware personal cross-media information management combining the best of the two existing PIM paradigms and integrating the most relevant features of the human memory. Further, we outline how the OC2 framework has been implemented based on a domain-specific application of the Resource-Selector-Link (RSL) hypermedia metamodel.\n\nResearch paper: https://beatsigner.com/publications/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems.pdf","location":"ER 2014, 33rd International Conference on Conceptual Modelling, Atlanta, USA, October, 2014","event_date":{"day":28,"month":10,"year":2014,"errors":{}},"publication_date":{"day":null,"month":null,"year":2014,"errors":{}},"publication_name":"Presentation given at ER 2014, 33rd International Conference on Conceptual Modelling, Atlanta, USA"},"translated_abstract":"Information fragmentation is a well-known issue in personal information management (PIM). In order to overcome this problem, various PIM solutions have focussed on linking documents via semantic relationships. More recently, task-centered information management (TIM) has been introduced as an alternative PIM paradigm. While these two paradigms have their strengths and weaknesses, we aim for a new PIM system design approach to achieve better synergies with human memory. We further envision a cross-media solution where physical information is integrated with a user's digital personal information space. We present the Object-Concept-Context (OC2) conceptual framework for context-aware personal cross-media information management combining the best of the two existing PIM paradigms and integrating the most relevant features of the human memory. 
Further, we outline how the OC2 framework has been implemented based on a domain-specific application of the Resource-Selector-Link (RSL) hypermedia metamodel.\n\nResearch paper: https://beatsigner.com/publications/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems.pdf","internal_url":"https://www.academia.edu/9159346/Towards_a_Conceptual_Framework_and_Metamodel_for_Context_Aware_Personal_Cross_Media_Information_Management_Systems","translated_internal_url":"","created_at":"2014-11-06T04:25:06.741-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[],"downloadable_attachments":[{"id":77808371,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77808371/thumbnails/1.jpg","file_name":"ER2014.pdf","download_url":"https://www.academia.edu/attachments/77808371/download_file","bulk_download_file_name":"Towards_a_Conceptual_Framework_and_Metam.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77808371/ER2014-libre.pdf?1640974180=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Conceptual_Framework_and_Metam.pdf\u0026Expires=1744203895\u0026Signature=O~-Yh7gTnQ6sc5h6nO4zZbtmYijN3NGXfnVFYfjH3WA9KZqeLEX3HOlbK0Xr6vyHEqRkBOl8LdLGED4Ncg4ChY2PMtFUEXXbq-KLDzjUIHa5-48UN6Lo6UP~1pEzdkYvZWmc~b~oeMLldDvcGRHQ31dE21omRGtTDv65sjdYa0S24I3LRKzTSm~T8bdFD28s6P9HRtIlQQIYiGIH8om-thNJtNdCgzj1pGO92v3YWcndur706xqkRIT0x6ZAUut1EIPCmHHgREtSapxTYt55oWQnvUsmO2hGs9M4jKQWQJVgSOfG2n2HIIeRAY~idZBa-6vOxC5vpf2Lrj~-rpPoyQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Towards_a_Conceptual_Framework_and_Metamodel_for_Context_Aware_Personal_Cross_Media_Information_Management_Systems","translated_slug":"","page_count":13,"language":"en","content_type":"Work","summary":"Information fragmentation is a well-known issue in personal information management (PIM). In order to overcome this problem, various PIM solutions have focussed on linking documents via semantic relationships. More recently, task-centered information management (TIM) has been introduced as an alternative PIM paradigm. While these two paradigms have their strengths and weaknesses, we aim for a new PIM system design approach to achieve better synergies with human memory. We further envision a cross-media solution where physical information is integrated with a user's digital personal information space. We present the Object-Concept-Context (OC2) conceptual framework for context-aware personal cross-media information management combining the best of the two existing PIM paradigms and integrating the most relevant features of the human memory. 
Further, we outline how the OC2 framework has been implemented based on a domain-specific application of the Resource-Selector-Link (RSL) hypermedia metamodel.\n\nResearch paper: https://beatsigner.com/publications/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77808371,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77808371/thumbnails/1.jpg","file_name":"ER2014.pdf","download_url":"https://www.academia.edu/attachments/77808371/download_file","bulk_download_file_name":"Towards_a_Conceptual_Framework_and_Metam.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77808371/ER2014-libre.pdf?1640974180=\u0026response-content-disposition=attachment%3B+filename%3DTowards_a_Conceptual_Framework_and_Metam.pdf\u0026Expires=1744203895\u0026Signature=O~-Yh7gTnQ6sc5h6nO4zZbtmYijN3NGXfnVFYfjH3WA9KZqeLEX3HOlbK0Xr6vyHEqRkBOl8LdLGED4Ncg4ChY2PMtFUEXXbq-KLDzjUIHa5-48UN6Lo6UP~1pEzdkYvZWmc~b~oeMLldDvcGRHQ31dE21omRGtTDv65sjdYa0S24I3LRKzTSm~T8bdFD28s6P9HRtIlQQIYiGIH8om-thNJtNdCgzj1pGO92v3YWcndur706xqkRIT0x6ZAUut1EIPCmHHgREtSapxTYt55oWQnvUsmO2hGs9M4jKQWQJVgSOfG2n2HIIeRAY~idZBa-6vOxC5vpf2Lrj~-rpPoyQ__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":5266,"name":"Human Information Interaction","url":"https://www.academia.edu/Documents/in/Human_Information_Interaction"},{"id":8215,"name":"Conceptual Modelling","url":"https://www.academia.edu/Documents/in/Conceptual_Modelling"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":23997,"name":"HCI","url":"https://www.academia.edu/Documents/in/HCI"},{"id":402880,"name":"Personal Information Management - PIM","url":"https://www.academia.edu/Documents/in/Personal_Information_Management_-_PIM"}],"urls":[{"id":15942587,"url":"https://speakerdeck.com/signer/towards-a-conceptual-framework-and-metamodel-for-context-aware-personal-cross-media-information-management-systems"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-9159346-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="8814156"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" 
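The Resource-Selector-Link (RSL) hypermedia metamodel mentioned above is built around three core concepts: resources representing entire documents or media, selectors addressing parts of a resource, and links connecting arbitrary entities. The following Java sketch captures these relationships in a strongly simplified form; the fields and methods are our own illustration, not the actual RSL implementation.

```java
import java.util.*;

// Strongly simplified sketch of the core RSL concepts; illustrative only.
abstract class Entity {
    final String name;
    Entity(String name) { this.name = name; }
}

// A resource represents an entire digital or physical document or medium.
class Resource extends Entity {
    Resource(String name) { super(name); }
}

// A selector addresses a part of a resource, e.g. a text range or an
// image region.
class Selector extends Entity {
    final Resource resource;
    Selector(String name, Resource resource) {
        super(name);
        this.resource = resource;
    }
}

// A link associates a set of source entities with a set of target entities.
// Since resources, selectors and links are all entities, links can connect
// whole documents, parts of documents and even other links.
class Link extends Entity {
    final Set<Entity> sources = new HashSet<>();
    final Set<Entity> targets = new HashSet<>();
    Link(String name) { super(name); }
}
```

A domain-specific application such as OC2 would then specialise these generic entity types for its own objects, concepts and contexts.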
href="https://www.academia.edu/8814156/Open_Cross_Document_Linking_and_Browsing_based_on_a_Visual_Plug_in_Architecture"><img alt="Research paper thumbnail of Open Cross-Document Linking and Browsing based on a Visual Plug-in Architecture" class="work-thumbnail" src="https://attachments.academia-assets.com/35159033/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/8814156/Open_Cross_Document_Linking_and_Browsing_based_on_a_Visual_Plug_in_Architecture">Open Cross-Document Linking and Browsing based on a Visual Plug-in Architecture</a></div><div class="wp-workCard_item"><span>Presentation given at WISE 2014, 15th International Conference on Web Information System Engineering, Thessaloniki, Greece</span><span>, 2014</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Digital documents often do not exist in isolation but are implicitly or explicitly linked to part...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Digital documents often do not exist in isolation but are implicitly or explicitly linked to parts of other documents. Nevertheless,most existing document formats only support links to web resources but not to parts of third-party documents. An open cross-document link service should address the multitude of existing document formats and be extensible to support emerging document formats and models. We present an architecture and prototype of an open cross-document link service and browser that is based on the RSL hypermedia metamodel. A main contribution is the specification and development of a visual plug-in solution that enables the integration of new document formats without requiring changes to the cross-document browser’s main user interface component. 
The presented visual plug-in mechanism makes use of the Open Service Gateway initiative (OSGi) specification for modularisation and plug-in extensibility and has been validated by developing data as well as visual plug-ins for a number of existing document formats.<br /><br />Research paper: <a href="https://beatsigner.com/publications/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture.pdf" rel="nofollow">https://beatsigner.com/publications/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="b968359483fa75bb0207c8a6a08f3b76" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:35159033,&quot;asset_id&quot;:8814156,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/35159033/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="8814156"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="8814156"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 8814156; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=8814156]").text(description); $(".js-view-count[data-work-id=8814156]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 8814156; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='8814156']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "b968359483fa75bb0207c8a6a08f3b76" } } $('.js-work-strip[data-work-id=8814156]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":8814156,"title":"Open Cross-Document Linking and Browsing based on a Visual Plug-in Architecture","translated_title":"","metadata":{"abstract":"Digital documents often do not exist in isolation but are 
implicitly or explicitly linked to parts of other documents. Nevertheless,most existing document formats only support links to web resources but not to parts of third-party documents. An open cross-document link service should address the multitude of existing document formats and be extensible to support emerging document formats and models. We present an architecture and prototype of an open cross-document link service and browser that is based on the RSL hypermedia metamodel. A main contribution is the specification and development of a visual plug-in solution that enables the integration of new document formats without requiring changes to the cross-document browser’s main user interface component. The presented visual plug-in mechanism makes use of the Open Service Gateway initiative (OSGi) specification for modularisation and plug-in extensibility and has been validated by developing data as well as visual plug-ins for a number of existing document formats.\n\nResearch paper: https://beatsigner.com/publications/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture.pdf","location":"WISE 2014, 15th International Conference on Web Information System Engineering, Thessaloniki, Greece, October, 2014 ","event_date":{"day":14,"month":10,"year":2014,"errors":{}},"publication_date":{"day":null,"month":null,"year":2014,"errors":{}},"publication_name":"Presentation given at WISE 2014, 15th International Conference on Web Information System Engineering, Thessaloniki, Greece"},"translated_abstract":"Digital documents often do not exist in isolation but are implicitly or explicitly linked to parts of other documents. Nevertheless,most existing document formats only support links to web resources but not to parts of third-party documents. An open cross-document link service should address the multitude of existing document formats and be extensible to support emerging document formats and models. We present an architecture and prototype of an open cross-document link service and browser that is based on the RSL hypermedia metamodel. A main contribution is the specification and development of a visual plug-in solution that enables the integration of new document formats without requiring changes to the cross-document browser’s main user interface component. 
The presented visual plug-in mechanism makes use of the Open Service Gateway initiative (OSGi) specification for modularisation and plug-in extensibility and has been validated by developing data as well as visual plug-ins for a number of existing document formats.\n\nResearch paper: https://beatsigner.com/publications/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture.pdf","internal_url":"https://www.academia.edu/8814156/Open_Cross_Document_Linking_and_Browsing_based_on_a_Visual_Plug_in_Architecture","translated_internal_url":"","created_at":"2014-10-16T07:01:00.633-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[],"downloadable_attachments":[{"id":35159033,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/35159033/thumbnails/1.jpg","file_name":"wise2014.pdf","download_url":"https://www.academia.edu/attachments/35159033/download_file","bulk_download_file_name":"Open_Cross_Document_Linking_and_Browsing.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/35159033/wise2014-libre.pdf?1413476840=\u0026response-content-disposition=attachment%3B+filename%3DOpen_Cross_Document_Linking_and_Browsing.pdf\u0026Expires=1744203895\u0026Signature=KPQhVB2QSR-B7R3LDnBTtvs7SFFwgsy~PZo4OZ-40SOUzv1ZVdmT2jtoZ9esusuq52JDNV9FrU-QfgFA3m-y3dM000r-Kzjd7WUbDeKhCT-IvAIgJOzi4xMjvUnKB5cSysh6kqNSD9ErQSy-yXYLcBtzGqk1Gn~Bti41y8f8uW5ItpqpNlA3iU~cH1eQmnJjnVJmMwpGCaaXvKfO91S7seetZOsMEyPRjJrTTPo8VhAqaoSW-QNqZ-poLbOkZbIpsFNf6wFQfGq3ZJPqQ~19xTbpC4VZZg22bichkogZO1ptO5O5kxDNhFvgyKYTIKCV3k3uqLC8PEWEJBUQ7Te3ww__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Open_Cross_Document_Linking_and_Browsing_based_on_a_Visual_Plug_in_Architecture","translated_slug":"","page_count":21,"language":"en","content_type":"Work","summary":"Digital documents often do not exist in isolation but are implicitly or explicitly linked to parts of other documents. Nevertheless,most existing document formats only support links to web resources but not to parts of third-party documents. An open cross-document link service should address the multitude of existing document formats and be extensible to support emerging document formats and models. We present an architecture and prototype of an open cross-document link service and browser that is based on the RSL hypermedia metamodel. A main contribution is the specification and development of a visual plug-in solution that enables the integration of new document formats without requiring changes to the cross-document browser’s main user interface component. 
The presented visual plug-in mechanism makes use of the Open Service Gateway initiative (OSGi) specification for modularisation and plug-in extensibility and has been validated by developing data as well as visual plug-ins for a number of existing document formats.\n\nResearch paper: https://beatsigner.com/publications/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture.pdf","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":35159033,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/35159033/thumbnails/1.jpg","file_name":"wise2014.pdf","download_url":"https://www.academia.edu/attachments/35159033/download_file","bulk_download_file_name":"Open_Cross_Document_Linking_and_Browsing.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/35159033/wise2014-libre.pdf?1413476840=\u0026response-content-disposition=attachment%3B+filename%3DOpen_Cross_Document_Linking_and_Browsing.pdf\u0026Expires=1744203895\u0026Signature=KPQhVB2QSR-B7R3LDnBTtvs7SFFwgsy~PZo4OZ-40SOUzv1ZVdmT2jtoZ9esusuq52JDNV9FrU-QfgFA3m-y3dM000r-Kzjd7WUbDeKhCT-IvAIgJOzi4xMjvUnKB5cSysh6kqNSD9ErQSy-yXYLcBtzGqk1Gn~Bti41y8f8uW5ItpqpNlA3iU~cH1eQmnJjnVJmMwpGCaaXvKfO91S7seetZOsMEyPRjJrTTPo8VhAqaoSW-QNqZ-poLbOkZbIpsFNf6wFQfGq3ZJPqQ~19xTbpC4VZZg22bichkogZO1ptO5O5kxDNhFvgyKYTIKCV3k3uqLC8PEWEJBUQ7Te3ww__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":41474,"name":"Cross Media Platforms","url":"https://www.academia.edu/Documents/in/Cross_Media_Platforms"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48200,"name":"Digital Library","url":"https://www.academia.edu/Documents/in/Digital_Library"},{"id":54192,"name":"Hypertext","url":"https://www.academia.edu/Documents/in/Hypertext"},{"id":182921,"name":"Open Hypermedia","url":"https://www.academia.edu/Documents/in/Open_Hypermedia"}],"urls":[{"id":15942708,"url":"https://speakerdeck.com/signer/open-cross-document-linking-and-browsing-based-on-a-visual-plug-in-architecture"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-8814156-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="8241263"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" 
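To make the plug-in architecture more tangible, the sketch below shows one way a per-format plug-in could be split into a data part and a visual part, so that support for a new document format can be added without changing the browser's main user interface component. The interfaces and method signatures are hypothetical illustrations; in the described system, such plug-ins would be packaged and registered as OSGi bundles rather than wired up by hand.

```java
// Hypothetical sketch of per-format plug-in interfaces; not the actual
// plug-in API of the presented cross-document browser.

interface DataPlugin {
    // The document format handled by this plug-in, e.g. "application/pdf".
    String supportedFormat();

    // Resolve a format-specific selector specification (e.g. a page number
    // and text range) into an anchor the link service can store and follow.
    String resolveSelector(String documentUri, String selectorSpec);
}

interface VisualPlugin {
    // Must match the format of the corresponding data plug-in.
    String supportedFormat();

    // Render the document in the browser and highlight the given anchor;
    // new formats plug in here without touching the main user interface.
    void render(String documentUri, String anchor);
}
```

The split mirrors the distinction in the abstract between "data as well as visual plug-ins": the data plug-in knows how to address parts of a document, while the visual plug-in knows how to display them.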
data-click-track="profile-work-strip-thumbnail" href="https://www.academia.edu/8241263/From_User_Needs_to_Opportunities_in_Personal_Information_Management_A_Case_Study_on_Organisational_Strategies_in_Cross_Media_Information_Spaces"><img alt="Research paper thumbnail of From User Needs to Opportunities in Personal Information Management: A Case Study on Organisational Strategies in Cross-Media Information Spaces" class="work-thumbnail" src="https://attachments.academia-assets.com/77808845/thumbnails/1.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/8241263/From_User_Needs_to_Opportunities_in_Personal_Information_Management_A_Case_Study_on_Organisational_Strategies_in_Cross_Media_Information_Spaces">From User Needs to Opportunities in Personal Information Management: A Case Study on Organisational Strategies in Cross-Media Information Spaces</a></div><div class="wp-workCard_item"><span>Presentation given at DL 2014, International Conference on Digital Libraries, London, UK</span><span>, 2014</span></div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The efficient management of our daily information in physical and digital information spaces is a...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The efficient management of our daily information in physical and digital information spaces is a well-known problem. Current research on personal information management (PIM) aims to understand and improve organisational and re-finding activities. We present a case study about organisational strategies in cross-media information spaces, consisting of physical as well as digital information. In contrast to existing work, we provide a unified view on organisational strategies and investigate how re-finding cues differ across the physical and digital space. We further introduce a new mixing organisational strategy which is used in addition to the well-known filing and piling strategies. 
Last but not least, based on the results of our study we discuss opportunities and pitfalls for future descriptive PIM research and outline some directions for future PIM system design.<br /><br />Research paper: <a href="https://beatsigner.com/publications/from-user-needs-to-opportunities-in-personal-information-management-a-case-study-on-organisational-strategies-in-cross-media-information-spaces.pdf" rel="nofollow">https://beatsigner.com/publications/from-user-needs-to-opportunities-in-personal-information-management-a-case-study-on-organisational-strategies-in-cross-media-information-spaces.pdf</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="ab7c8b4fc5749b993ba9922624147aa7" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77808845,&quot;asset_id&quot;:8241263,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77808845/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="8241263"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="8241263"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 8241263; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=8241263]").text(description); $(".js-view-count[data-work-id=8241263]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 8241263; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='8241263']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "ab7c8b4fc5749b993ba9922624147aa7" } } $('.js-work-strip[data-work-id=8241263]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":8241263,"title":"From User Needs to Opportunities in Personal Information Management: A Case Study on Organisational Strategies in Cross-Media Information 
Spaces","translated_title":"","metadata":{"abstract":"The efficient management of our daily information in physical and digital information spaces is a well-known problem. Current research on personal information management (PIM) aims to understand and improve organisational and re-finding activities. We present a case study about organisational strategies in cross-media information spaces, consisting of physical as well as digital information. In contrast to existing work, we provide a unified view on organisational strategies and investigate how re-finding cues differ across the physical and digital space. We further introduce a new mixing organisational strategy which is used in addition to the well-known filing and piling strategies. Last but not least, based on the results of our study we discuss opportunities and pitfalls for future descriptive PIM research and outline some directions for future PIM system design.\n\nResearch paper: https://beatsigner.com/publications/from-user-needs-to-opportunities-in-personal-information-management-a-case-study-on-organisational-strategies-in-cross-media-information-spaces.pdf","location":"DL 2014, International Conference on Digital Libraries, London, UK, September, 2014","event_date":{"day":9,"month":9,"year":2014,"errors":{}},"publication_date":{"day":null,"month":null,"year":2014,"errors":{}},"publication_name":"Presentation given at DL 2014, International Conference on Digital Libraries, London, UK"},"translated_abstract":"The efficient management of our daily information in physical and digital information spaces is a well-known problem. Current research on personal information management (PIM) aims to understand and improve organisational and re-finding activities. We present a case study about organisational strategies in cross-media information spaces, consisting of physical as well as digital information. In contrast to existing work, we provide a unified view on organisational strategies and investigate how re-finding cues differ across the physical and digital space. We further introduce a new mixing organisational strategy which is used in addition to the well-known filing and piling strategies. 
Last but not least, based on the results of our study we discuss opportunities and pitfalls for future descriptive PIM research and outline some directions for future PIM system design.\n\nResearch paper: https://beatsigner.com/publications/from-user-needs-to-opportunities-in-personal-information-management-a-case-study-on-organisational-strategies-in-cross-media-information-spaces.pdf","internal_url":"https://www.academia.edu/8241263/From_User_Needs_to_Opportunities_in_Personal_Information_Management_A_Case_Study_on_Organisational_Strategies_in_Cross_Media_Information_Spaces","translated_internal_url":"","created_at":"2014-09-08T03:50:53.902-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[],"downloadable_attachments":[{"id":77808845,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77808845/thumbnails/1.jpg","file_name":"DL2014.pdf","download_url":"https://www.academia.edu/attachments/77808845/download_file","bulk_download_file_name":"From_User_Needs_to_Opportunities_in_Pers.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77808845/DL2014-libre.pdf?1640993274=\u0026response-content-disposition=attachment%3B+filename%3DFrom_User_Needs_to_Opportunities_in_Pers.pdf\u0026Expires=1744203895\u0026Signature=PnHXN5Z-t5raBXxawFg66CCD3~c1Bq4UTtIXxnSjZL0WcFIi7D5NC3dcr-DkzaEdcPbnf25ca-FF00gsvwXdSVDuLcskdimrxwb-dN0Im9WIA0KUiQqlt7onhLYXn1BmW1SI355c-UVtWJYGmf6xZbnvVxu4BpfZcn~Wg3FL~eZ~-5H66ihSQyaZi6ORVlmtwfUntHkv0KMSQEiuMNBpzpg5sLP3pdjCn1MF3R9hFo5WQdQGEgVQg3~ETZBEE7mhXN0PTmmneq3Ft1e8oHl1kFHsGUe856k73Cl8kLh-Dy37jetblIhdV9noePGI95sE7ud4S~YYfJ-JuZn4VHly2A__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"From_User_Needs_to_Opportunities_in_Personal_Information_Management_A_Case_Study_on_Organisational_Strategies_in_Cross_Media_Information_Spaces","translated_slug":"","page_count":18,"language":"en","content_type":"Work","summary":"The efficient management of our daily information in physical and digital information spaces is a well-known problem. Current research on personal information management (PIM) aims to understand and improve organisational and re-finding activities. We present a case study about organisational strategies in cross-media information spaces, consisting of physical as well as digital information. In contrast to existing work, we provide a unified view on organisational strategies and investigate how re-finding cues differ across the physical and digital space. We further introduce a new mixing organisational strategy which is used in addition to the well-known filing and piling strategies. 
Topics: Information Systems, Engineering, Computer Science, Information Science, Human Computer Interaction, Information Technology, Informatics, Information Management, Ubiquitous Computing, Human Information Interaction, Augmented Reality, Human Information Behavior, Cross-Media Information Spaces, User Interface, Personal Information Management, Crossmedia, User Studies, Information Technology and System Integration, Library and Archival Science
Slides: https://speakerdeck.com/signer/from-user-needs-to-opportunities-in-personal-information-management-a-case-study-on-organisational-strategies-in-cross-media-information-spaces

Print-n-Link: Weaving the Paper Web
Presentation given at DocEng 2006, ACM Symposium on Document Engineering, Amsterdam, The Netherlands, 2006

Citations form the basis for a web of scientific publications. Search engines, embedded hyperlinks and digital libraries all simplify the task of finding publications of interest on the web and navigating to cited publications or web sites. However, the actual reading of publications often takes place on paper and frequently on the move. We present Print-n-Link, a system that uses interactive paper technologies to enhance the reading process by enabling users to access digital information and/or search for cited documents from a printed version of a publication, using a digital pen for interaction. A special virtual printer driver automatically generates links from paper to digital services during the printing process, based on an analysis of PDF documents.
Depending on the user setting and interaction gesture, the system may retrieve metadata about the citation and inform the user through an audio channel or directly display the cited document on the user's screen.

Research paper: https://beatsigner.com/publications/print-n-link-weaving-the-paper-web.pdf
Topics: Computer Science, Software Engineering, Human Computer Interaction, Multimedia, Augmented Reality, Interactive and Digital Media, Cross-Media Information Spaces, Tangible User Interfaces, Document Engineering, Hypermedia
Slides: https://speakerdeck.com/signer/print-n-link-weaving-the-paper-web
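
To make the link-generation step concrete, here is a minimal, self-contained sketch of how citation markers could be located in text extracted from a PDF at print time, so that each marker's range can later be mapped to an active region on paper. The regex, record, and class names are illustrative assumptions, not the system's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the link-generation step: scan extracted page text for numeric
// citation markers such as [7] or [3, 12] and record their character ranges.
public class CitationLinker {

    // Hypothetical link record: citation key plus its position in the text.
    record CitationLink(String key, int start, int end) {}

    private static final Pattern CITATION =
        Pattern.compile("\\[(\\d+(?:\\s*,\\s*\\d+)*)\\]");

    public static List<CitationLink> findCitations(String pageText) {
        List<CitationLink> links = new ArrayList<>();
        Matcher m = CITATION.matcher(pageText);
        while (m.find()) {
            // a marker like [3, 12] yields one link per cited key
            for (String key : m.group(1).split("\\s*,\\s*")) {
                links.add(new CitationLink(key, m.start(), m.end()));
            }
        }
        return links;
    }

    public static void main(String[] args) {
        String text = "as shown in earlier work [3, 12] and confirmed by [7].";
        findCitations(text).forEach(l ->
            System.out.println("citation " + l.key() + " at " + l.start() + "-" + l.end()));
    }
}
```
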
class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title"><a class="js-work-strip-work-link text-gray-darker" data-click-track="profile-work-strip-title" href="https://www.academia.edu/1661746/Linking_Paper_and_Digital_Media">Linking Paper and Digital Media</a></div><div class="wp-workCard_item"><span>Presentation given at DELOS Demo Day</span><span>, 2006</span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><a id="26e45ba0dd85635ccf01851f4debeae6" class="wp-workCard--action" rel="nofollow" data-click-track="profile-work-strip-download" data-download="{&quot;attachment_id&quot;:77862784,&quot;asset_id&quot;:1661746,&quot;asset_type&quot;:&quot;Work&quot;,&quot;button_location&quot;:&quot;profile&quot;}" href="https://www.academia.edu/attachments/77862784/download_file?s=profile"><span><i class="fa fa-arrow-down"></i></span><span>Download</span></a><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="1661746"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="1661746"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 1661746; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=1661746]").text(description); $(".js-view-count[data-work-id=1661746]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 1661746; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='1661746']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (true){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "26e45ba0dd85635ccf01851f4debeae6" } } $('.js-work-strip[data-work-id=1661746]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":1661746,"title":"Linking Paper and Digital Media","translated_title":"","metadata":{"location":"DELOS Demo Day","event_date":{"day":null,"month":2,"year":2006,"errors":{}},"ai_abstract":"This paper discusses the integration of paper and digital media through the development of an interactive paper framework (iPaper) at ETH Zurich. 
The framework aims to enhance the use of paper in everyday settings by providing a platform for cross-media information management, featuring interactive functionalities and collaborative capabilities. Key aspects include semantic resources, user trials, and applications that support innovative interaction methods such as voice feedback and handwriting.","publication_date":{"day":null,"month":null,"year":2006,"errors":{}},"publication_name":"Presentation given at DELOS Demo Day"},"translated_abstract":null,"internal_url":"https://www.academia.edu/1661746/Linking_Paper_and_Digital_Media","translated_internal_url":"","created_at":"2010-01-12T15:46:36.379-08:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"conference_presentation","co_author_tags":[],"downloadable_attachments":[{"id":77862784,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77862784/thumbnails/1.jpg","file_name":"DELOS.pdf","download_url":"https://www.academia.edu/attachments/77862784/download_file","bulk_download_file_name":"Linking_Paper_and_Digital_Media.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77862784/DELOS-libre.pdf?1641073321=\u0026response-content-disposition=attachment%3B+filename%3DLinking_Paper_and_Digital_Media.pdf\u0026Expires=1744203895\u0026Signature=NV7jV7vQbwy8PBiSQzHD99GWYEmWGCIN4SuBdgq058l6TyLAMwz392nv9E9jN7mSYrVydDqXyst5Lgi2~WAvhASXyFsbHyjiPHnELRjhRUcrNOlx9NZzGYwXHYopc~sW1jHCKw-g1jv~zQCmpd0rXKrXPkdOyQKWc4NFSKpnieJbb6OUjGwemPlFijf~~AyHklXoWcHMZXZN6rTipFxwiZOxmM3Wj1QnXr2kPehb7pUpHvmyJEyMOY2OIht5fJh74Xq1DlZhhg6cnYMaVHt2bbcQBt9OfuCGkHQb6G556ECVwiv-LyVegmBzwcoAIcmUd5jKgUPquT1DDAaiYAuU8g__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"slug":"Linking_Paper_and_Digital_Media","translated_slug":"","page_count":18,"language":"en","content_type":"Work","summary":null,"impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[{"id":77862784,"title":"","file_type":"pdf","scribd_thumbnail_url":"https://attachments.academia-assets.com/77862784/thumbnails/1.jpg","file_name":"DELOS.pdf","download_url":"https://www.academia.edu/attachments/77862784/download_file","bulk_download_file_name":"Linking_Paper_and_Digital_Media.pdf","bulk_download_url":"https://d1wqtxts1xzle7.cloudfront.net/77862784/DELOS-libre.pdf?1641073321=\u0026response-content-disposition=attachment%3B+filename%3DLinking_Paper_and_Digital_Media.pdf\u0026Expires=1744203895\u0026Signature=NV7jV7vQbwy8PBiSQzHD99GWYEmWGCIN4SuBdgq058l6TyLAMwz392nv9E9jN7mSYrVydDqXyst5Lgi2~WAvhASXyFsbHyjiPHnELRjhRUcrNOlx9NZzGYwXHYopc~sW1jHCKw-g1jv~zQCmpd0rXKrXPkdOyQKWc4NFSKpnieJbb6OUjGwemPlFijf~~AyHklXoWcHMZXZN6rTipFxwiZOxmM3Wj1QnXr2kPehb7pUpHvmyJEyMOY2OIht5fJh74Xq1DlZhhg6cnYMaVHt2bbcQBt9OfuCGkHQb6G556ECVwiv-LyVegmBzwcoAIcmUd5jKgUPquT1DDAaiYAuU8g__\u0026Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA"}],"research_interests":[{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":472,"name":"Human Computer 
Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":11732,"name":"Linked Data","url":"https://www.academia.edu/Documents/in/Linked_Data"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"}],"urls":[{"id":15971654,"url":"https://speakerdeck.com/signer/linking-paper-and-digital-media"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-1661746-figures'); } }); </script> </div><div class="profile--tab_content_container js-tab-pane tab-pane" data-section-id="2768024" id="videos"><div class="js-work-strip profile--work_container" data-work-id="36545285"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/36545285/MindXpres"><img alt="Research paper thumbnail of MindXpres" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">MindXpres</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Demo of our MindXpres presentation platform. More details about the presented system can be found...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Demo of our MindXpres presentation platform. 
Demo of our MindXpres presentation platform. More details about the presented system can be found on mindxpres.com.
Topics: Information Systems, Computer Science, Human Computer Interaction, Information Technology, Education, Educational Technology, Informatics, Computer Engineering, Computer Supported Collaborative Learning (CSCL), Cross-Media Information Spaces, PowerPoint, Presentations, Slideware
Video: https://www.youtube.com/watch?v=RfgotXB5Ajc

SpeeG2

More details about the presented system can be found in our paper 'SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry', presented at the 15th International Conference on Multimodal Interaction in Sydney, Australia: https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry
Topics: Information Systems, Computer Science, Software Engineering, Human Computer Interaction, Information Technology, Communication, Technology, New Media, Informatics, Information Management, Ubiquitous Computing, Multimedia, Interaction Design, Augmented Reality, Interactive and Digital Media, Multimodal Interaction, Cross-Media Information Spaces, Automatic Speech Recognition, Speech Recognition, Speech Communication, Gesture Recognition, Tangible User Interfaces, Speech Processing, Multimodal Interfaces, Hypermedia, Kinect, Voice Recognition
Video: https://www.youtube.com/watch?v=ItKySNv8l90
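
As a rough illustration of the speech-plus-gesture idea behind SpeeG2 (and its predecessor SpeeG below), the following sketch lets a tracked hand position choose among the N-best candidates proposed by a speech recogniser, so recognition errors can be corrected without a keyboard or controller. All types, names, and the selection scheme are hypothetical and do not reflect the actual SpeeG2 implementation.

```java
import java.util.List;

// Illustrative speech-plus-gesture text entry: the recogniser proposes
// N-best candidates per word slot and the user's hand height picks one.
public class MultimodalTextEntry {

    private final StringBuilder text = new StringBuilder();

    // Map a normalised hand height (0.0 = bottom, 1.0 = top) onto one of
    // the displayed recognition candidates for the current word slot.
    public String candidateAt(List<String> nBest, double handHeight) {
        int index = (int) Math.min(nBest.size() - 1,
                                   Math.floor(handHeight * nBest.size()));
        return nBest.get(index);
    }

    // Commit the highlighted candidate once a dwell gesture is detected.
    public void confirm(List<String> nBest, double handHeight) {
        text.append(candidateAt(nBest, handHeight)).append(' ');
    }

    public static void main(String[] args) {
        MultimodalTextEntry entry = new MultimodalTextEntry();
        List<String> nBest = List.of("there", "their", "they're");
        entry.confirm(nBest, 0.5);   // hand in the middle -> "their"
        System.out.println(entry.text);
    }
}
```
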
href="https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry" rel="nofollow">https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11590838"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11590838"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11590838; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11590838]").text(description); $(".js-view-count[data-work-id=11590838]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 11590838; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='11590838']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (false){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "-1" } } $('.js-work-strip[data-work-id=11590838]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":11590838,"title":"SpeeG2","translated_title":"","metadata":{"abstract":"More details about the presented system can be found in our paper entitled 'SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry' which has been presented at the 15th International Conference on Multimodal Interaction taking place in Sydney, Australia: https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry"},"translated_abstract":"More details about the presented system can be found in our paper entitled 'SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry' which has been presented at the 15th International Conference on Multimodal Interaction taking place in Sydney, Australia: 
https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry","internal_url":"https://www.academia.edu/11590838/SpeeG2","translated_internal_url":"","created_at":"2015-03-22T14:06:05.537-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[],"slug":"SpeeG2","translated_slug":"","page_count":null,"language":"en","content_type":"Work","summary":"More details about the presented system can be found in our paper entitled 'SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Entry' which has been presented at the 15th International Conference on Multimodal Interaction taking place in Sydney, Australia: https://www.academia.edu/4685517/SpeeG2_A_Speech-_and_Gesture-based_Interface_for_Efficient_Controller-free_Text_Entry","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11320,"name":"Automatic Speech Recognition","url":"https://www.academia.edu/Documents/in/Automatic_Speech_Recognition"},{"id":11984,"name":"Speech Recognition","url":"https://www.academia.edu/Documents/in/Speech_Recognition"},{"id":15817,"name":"Speech Communication","url":"https://www.academia.edu/Documents/in/Speech_Communication"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User 
Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":36835,"name":"Speech Processing","url":"https://www.academia.edu/Documents/in/Speech_Processing"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":91387,"name":"Kinect","url":"https://www.academia.edu/Documents/in/Kinect"},{"id":150569,"name":"Voice Recognition","url":"https://www.academia.edu/Documents/in/Voice_Recognition"}],"urls":[{"id":4617493,"url":"https://www.youtube.com/watch?v=ItKySNv8l90"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11590838-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11590819"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11590819/SpeeG"><img alt="Research paper thumbnail of SpeeG" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">SpeeG</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">More details about the presented system can be found in our paper entitled &#39;SpeeG: A Multimodal S...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">More details about the presented system can be found in our paper entitled &#39;SpeeG: A Multimodal Speech- and Gesture-based Text Input Solution&#39; which has been presented at the International Working Conference on Advanced Visual Interfaces taking place in Capri Island, Italy: <a href="https://www.academia.edu/1434991/SpeeG_A_Multimodal_Speech-_and_Gesture-based_Text_Input_Solution" rel="nofollow">https://www.academia.edu/1434991/SpeeG_A_Multimodal_Speech-_and_Gesture-based_Text_Input_Solution</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11590819"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11590819"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11590819; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11590819]").text(description); $(".js-view-count[data-work-id=11590819]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 
Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1453,"name":"Information Management","url":"https://www.academia.edu/Documents/in/Information_Management"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11320,"name":"Automatic Speech Recognition","url":"https://www.academia.edu/Documents/in/Automatic_Speech_Recognition"},{"id":11984,"name":"Speech Recognition","url":"https://www.academia.edu/Documents/in/Speech_Recognition"},{"id":15817,"name":"Speech Communication","url":"https://www.academia.edu/Documents/in/Speech_Communication"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":36835,"name":"Speech Processing","url":"https://www.academia.edu/Documents/in/Speech_Processing"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":91387,"name":"Kinect","url":"https://www.academia.edu/Documents/in/Kinect"},{"id":150569,"name":"Voice Recognition","url":"https://www.academia.edu/Documents/in/Voice_Recognition"},{"id":668047,"name":"Digital Media and Interaction Design","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Interaction_Design"}],"urls":[{"id":4617489,"url":"https://www.youtube.com/watch?v=ZCY_3TNkIgg"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11590819-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11590521"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11590521/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances"><img alt="Research paper thumbnail of Expressive Control of Indirect Augmented Reality During Live Music Performances " class="work-thumbnail" 
src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">Expressive Control of Indirect Augmented Reality During Live Music Performances </div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">This video demonstrates our Kinect-based solution for controlling the visual augmentation of a li...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">This video demonstrates our Kinect-based solution for controlling the visual augmentation of a live music performance via explicit gestures and implicit dance moves. Since the visual augmentation is not scripted, the presented approach enables more dynamic and spontaneous performances and leads to a more intense interaction between artist and audience. <br /> <br />Our solution for the expressive control of indirect augmented reality during live music performances has been used multiple times in live concerts by a well-known Belgium band. <br /> <br />More details about the presented system can be found in our paper entitled &#39;Expressive Control of Indirect Augmented Reality During Live Music Performances&#39; which has been presented at the 13th International Conference on New Interfaces for Musical Expression taking place in Seoul, Korea Republic: <a href="http://www.academia.edu/3408497/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances" rel="nofollow">http://www.academia.edu/3408497/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11590521"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11590521"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11590521; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11590521]").text(description); $(".js-view-count[data-work-id=11590521]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 11590521; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='11590521']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 
})(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (false){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "-1" } } $('.js-work-strip[data-work-id=11590521]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":11590521,"title":"Expressive Control of Indirect Augmented Reality During Live Music Performances ","translated_title":"","metadata":{"abstract":"This video demonstrates our Kinect-based solution for controlling the visual augmentation of a live music performance via explicit gestures and implicit dance moves. Since the visual augmentation is not scripted, the presented approach enables more dynamic and spontaneous performances and leads to a more intense interaction between artist and audience.\r\n\r\nOur solution for the expressive control of indirect augmented reality during live music performances has been used multiple times in live concerts by a well-known Belgium band.\r\n\r\nMore details about the presented system can be found in our paper entitled 'Expressive Control of Indirect Augmented Reality During Live Music Performances' which has been presented at the 13th International Conference on New Interfaces for Musical Expression taking place in Seoul, Korea Republic: http://www.academia.edu/3408497/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances"},"translated_abstract":"This video demonstrates our Kinect-based solution for controlling the visual augmentation of a live music performance via explicit gestures and implicit dance moves. Since the visual augmentation is not scripted, the presented approach enables more dynamic and spontaneous performances and leads to a more intense interaction between artist and audience.\r\n\r\nOur solution for the expressive control of indirect augmented reality during live music performances has been used multiple times in live concerts by a well-known Belgium band.\r\n\r\nMore details about the presented system can be found in our paper entitled 'Expressive Control of Indirect Augmented Reality During Live Music Performances' which has been presented at the 13th International Conference on New Interfaces for Musical Expression taking place in Seoul, Korea Republic: http://www.academia.edu/3408497/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances","internal_url":"https://www.academia.edu/11590521/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances","translated_internal_url":"","created_at":"2015-03-22T13:40:56.608-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[],"slug":"Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances","translated_slug":"","page_count":null,"language":"en","content_type":"Work","summary":"This video demonstrates our Kinect-based solution for controlling the visual augmentation of a live music performance via explicit gestures and implicit dance moves. 
Since the visual augmentation is not scripted, the presented approach enables more dynamic and spontaneous performances and leads to a more intense interaction between artist and audience.\r\n\r\nOur solution for the expressive control of indirect augmented reality during live music performances has been used multiple times in live concerts by a well-known Belgium band.\r\n\r\nMore details about the presented system can be found in our paper entitled 'Expressive Control of Indirect Augmented Reality During Live Music Performances' which has been presented at the 13th International Conference on New Interfaces for Musical Expression taking place in Seoul, Korea Republic: http://www.academia.edu/3408497/Expressive_Control_of_Indirect_Augmented_Reality_During_Live_Music_Performances","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":48,"name":"Engineering","url":"https://www.academia.edu/Documents/in/Engineering"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":449,"name":"Software Engineering","url":"https://www.academia.edu/Documents/in/Software_Engineering"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":491,"name":"Information Technology","url":"https://www.academia.edu/Documents/in/Information_Technology"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":945,"name":"Performing Arts","url":"https://www.academia.edu/Documents/in/Performing_Arts"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1236,"name":"Art","url":"https://www.academia.edu/Documents/in/Art"},{"id":1596,"name":"Creative processes in contemporary dance","url":"https://www.academia.edu/Documents/in/Creative_processes_in_contemporary_dance"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3147,"name":"Gesture","url":"https://www.academia.edu/Documents/in/Gesture"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":4416,"name":"Interaction Design","url":"https://www.academia.edu/Documents/in/Interaction_Design"},{"id":5673,"name":"Augmented Reality","url":"https://www.academia.edu/Documents/in/Augmented_Reality"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":17701,"name":"Gesture Recognition","url":"https://www.academia.edu/Documents/in/Gesture_Recognition"},{"id":19712,"name":"Dance and 
Aesthetics","url":"https://www.academia.edu/Documents/in/Dance_and_Aesthetics"},{"id":20470,"name":"Tangible User Interfaces","url":"https://www.academia.edu/Documents/in/Tangible_User_Interfaces"},{"id":29167,"name":"Dance","url":"https://www.academia.edu/Documents/in/Dance"},{"id":37228,"name":"Multimodal Interfaces","url":"https://www.academia.edu/Documents/in/Multimodal_Interfaces"},{"id":44606,"name":"Hypermedia","url":"https://www.academia.edu/Documents/in/Hypermedia"},{"id":48842,"name":"Music and Gesture","url":"https://www.academia.edu/Documents/in/Music_and_Gesture"},{"id":64453,"name":"Gestures","url":"https://www.academia.edu/Documents/in/Gestures"},{"id":90300,"name":"Dance Research","url":"https://www.academia.edu/Documents/in/Dance_Research"},{"id":91387,"name":"Kinect","url":"https://www.academia.edu/Documents/in/Kinect"},{"id":122807,"name":"Visual and Performing Arts","url":"https://www.academia.edu/Documents/in/Visual_and_Performing_Arts"},{"id":320772,"name":"Performing Art","url":"https://www.academia.edu/Documents/in/Performing_Art"},{"id":347349,"name":"Vision-based hand gesture recognition","url":"https://www.academia.edu/Documents/in/Vision-based_hand_gesture_recognition"},{"id":550029,"name":"Gesture Technology","url":"https://www.academia.edu/Documents/in/Gesture_Technology"},{"id":668047,"name":"Digital Media and Interaction Design","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Interaction_Design"},{"id":1013140,"name":"Music and Performing Arts","url":"https://www.academia.edu/Documents/in/Music_and_Performing_Arts"}],"urls":[{"id":4617453,"url":"https://www.youtube.com/watch?v=nyVs_5TfN4c"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11590521-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11583695"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11583695/EdFest_at_Edinburgh_Festivals"><img alt="Research paper thumbnail of EdFest @ Edinburgh Festivals" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">EdFest @ Edinburgh Festivals</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Our demonstration is a paper-based interactive guide for visitors to the world&#39;s largest internat...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">Our demonstration is a paper-based interactive guide for visitors to the world&#39;s largest international arts festival that was developed as part of a project investigating new forms of context-aware information delivery and interaction in mobile environments. Information stored in a database is accessed from a set of interactive paper documents, including a printed festival brochure, a city map and a bookmark. 

EdFest @ Edinburgh Festivals

Our demonstration is a paper-based interactive guide for visitors to the world's largest international arts festival, developed as part of a project investigating new forms of context-aware information delivery and interaction in mobile environments. Information stored in a database is accessed from a set of interactive paper documents, including a printed festival brochure, a city map and a bookmark. Active areas are defined within the documents; selecting one of them with a special digital pen sends the corresponding query request, along with context data, to a festival application database, and the response is returned to the visitor as generated speech output. In addition to paper-based information browsing and transactions such as ticket booking, the digital pen can also be used to capture event ratings and handwritten comments on events. The system integrates three main database components: a cross-media link server, a content management framework for multi-channel context-aware publishing of data, and the festival application database.

Global Information Systems Group, GlobIS, ETH Zurich
Moira C. Norrie, Beat Signer, Nadir Weibel, Michael Grossniklaus, Rudi Belotti, Corsin Decurtins

More details about the presented system can be found in our paper entitled 'Context-Aware Platform for Mobile Data Management', published in the journal Wireless Networks (WINET), 13(6): https://www.academia.edu/175422/Context-Aware_Platform_for_Mobile_Data_Management

Video: https://www.youtube.com/watch?v=QuaL5tDX8eQ
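
The active-area mechanism can be pictured as a lookup from pen coordinates to a bound query, enriched with the visitor's current context. The following sketch uses invented names (ActiveArea, resolve, build_request) and an invented request format purely to illustrate the flow; it is not the system's actual interface.

    # Sketch: map a pen position on a printed page to an active area's
    # query and attach context data before sending it to the database.
    from dataclasses import dataclass

    @dataclass
    class ActiveArea:
        page: int
        x0: float
        y0: float
        x1: float
        y1: float
        query: str            # query bound to this printed region

    def resolve(areas, page, x, y):
        """Return the query of the active area hit by the pen, if any."""
        for a in areas:
            if a.page == page and a.x0 <= x <= a.x1 and a.y0 <= y <= a.y1:
                return a.query
        return None

    def build_request(query, context):
        """Combine the query with context data such as time and location."""
        return {"query": query, "context": context}

    areas = [ActiveArea(page=3, x0=10, y0=10, x1=80, y1=30, query="events_nearby")]
    request = build_request(resolve(areas, 3, 42, 20),
                            {"time": "19:00", "location": "Royal Mile"})

The festival application database would answer such a request, and the answer would then be rendered as speech for the visitor.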
Norrie, Beat Signer, Nadir Weibel <br /> <br />More details about the presented system can be found in our paper entitled &#39;Print-n-Link: Weaving the Paper Web&#39; which has been presented at the ACM Symposium on Document Engineering taking place in Amsterdam, The Netherlands: <a href="https://www.academia.edu/175445/Print-n-Link_Weaving_the_Paper_Web" rel="nofollow">https://www.academia.edu/175445/Print-n-Link_Weaving_the_Paper_Web</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11583686"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11583686"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11583686; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11583686]").text(description); $(".js-view-count[data-work-id=11583686]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 11583686; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='11583686']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (false){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "-1" } } $('.js-work-strip[data-work-id=11583686]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":11583686,"title":"Print-n-Link: Weaving the Paper Web","translated_title":"","metadata":{"abstract":"Citations form the basis for a web of scientific publications. Search engines, embedded hyperlinks and digital libraries all simplify the task of finding publications of interest on the web and navigating to cited publications or web sites. However the actual reading of publications often takes place on paper and frequently on the move. We present a system Print-n-Link that uses technologies for interactive paper to enhance the reading process by enabling users to access digital information and/or searches for cited documents from a printed version of a publication using a digital pen for interaction. 
A special virtual printer driver automatically generates links from paper to digital services during the printing process based on an analysis of PDF documents. Depending on the user setting and interaction gesture, the system may retrieve metadata about the citation and inform the user through an audio channel or directly display the cited document on the users screen.\r\n\r\nGlobal Information Systems Group, GlobIS, ETH Zurich\r\nMoira C. Norrie, Beat Signer, Nadir Weibel\r\n\r\nMore details about the presented system can be found in our paper entitled 'Print-n-Link: Weaving the Paper Web' which has been presented at the ACM Symposium on Document Engineering taking place in Amsterdam, The Netherlands: https://www.academia.edu/175445/Print-n-Link_Weaving_the_Paper_Web"},"translated_abstract":"Citations form the basis for a web of scientific publications. Search engines, embedded hyperlinks and digital libraries all simplify the task of finding publications of interest on the web and navigating to cited publications or web sites. However the actual reading of publications often takes place on paper and frequently on the move. We present a system Print-n-Link that uses technologies for interactive paper to enhance the reading process by enabling users to access digital information and/or searches for cited documents from a printed version of a publication using a digital pen for interaction. A special virtual printer driver automatically generates links from paper to digital services during the printing process based on an analysis of PDF documents. Depending on the user setting and interaction gesture, the system may retrieve metadata about the citation and inform the user through an audio channel or directly display the cited document on the users screen.\r\n\r\nGlobal Information Systems Group, GlobIS, ETH Zurich\r\nMoira C. Norrie, Beat Signer, Nadir Weibel\r\n\r\nMore details about the presented system can be found in our paper entitled 'Print-n-Link: Weaving the Paper Web' which has been presented at the ACM Symposium on Document Engineering taking place in Amsterdam, The Netherlands: https://www.academia.edu/175445/Print-n-Link_Weaving_the_Paper_Web","internal_url":"https://www.academia.edu/11583686/Print_n_Link_Weaving_the_Paper_Web","translated_internal_url":"","created_at":"2015-03-22T06:06:09.603-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[],"slug":"Print_n_Link_Weaving_the_Paper_Web","translated_slug":"","page_count":null,"language":"en","content_type":"Work","summary":"Citations form the basis for a web of scientific publications. Search engines, embedded hyperlinks and digital libraries all simplify the task of finding publications of interest on the web and navigating to cited publications or web sites. However the actual reading of publications often takes place on paper and frequently on the move. We present a system Print-n-Link that uses technologies for interactive paper to enhance the reading process by enabling users to access digital information and/or searches for cited documents from a printed version of a publication using a digital pen for interaction. A special virtual printer driver automatically generates links from paper to digital services during the printing process based on an analysis of PDF documents. 
Depending on the user setting and interaction gesture, the system may retrieve metadata about the citation and inform the user through an audio channel or directly display the cited document on the users screen.\r\n\r\nGlobal Information Systems Group, GlobIS, ETH Zurich\r\nMoira C. Norrie, Beat Signer, Nadir Weibel\r\n\r\nMore details about the presented system can be found in our paper entitled 'Print-n-Link: Weaving the Paper Web' which has been presented at the ACM Symposium on Document Engineering taking place in Amsterdam, The Netherlands: https://www.academia.edu/175445/Print-n-Link_Weaving_the_Paper_Web","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[],"research_interests":[{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"}],"urls":[{"id":4616416,"url":"https://www.youtube.com/watch?v=d-lXzi8mPMY"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11583686-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11583680"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11583680/The_Lost_Cosmonaut_Interactive_Narrative"><img alt="Research paper thumbnail of The Lost Cosmonaut Interactive Narrative" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">The Lost Cosmonaut Interactive Narrative</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">The Lost Cosmonaut is an interactive narrative based on digitally enhanced paper. This technology...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">The Lost Cosmonaut is an interactive narrative based on digitally enhanced paper. This technology uses an electronic pen to mediate between paper and computer. Thus any actions of the pen on the paper can be captured and manipulated by a computer as well as we can map digitally controlled events onto paper. The story in this narrative environment reveals itself partially through written text and images on the paper surface just as any other printed story. However, additional information in form of digitally controlled outputs such as sound, light and projections can be accessed through interaction with pen and paper. Furthermore the audience is not only supposed to read and otherwise perceive information, we also want them to actively produce content for this environment by writing onto the paper. By doing so they also add content to the database containing the digital output at the same time. Hence we produce a complex multimedia environment that works on three levels: On paper, in a digitally controlled visual and acoustic environment and in the combination of both worlds. 
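
One way to picture the printer driver's link generation is as a pass over the PDF's text layer that turns citation markers into pen-selectable regions. The sketch below assumes numeric markers like [12] and an invented word-list input; the actual driver's PDF analysis is described in the paper.

    # Sketch: derive pen-selectable link areas from citation markers
    # found in a page's extracted words during printing.
    import re

    CITATION = re.compile(r"\[(\d+)\]")   # assumed numeric citation style

    def link_areas(page_no, words):
        """words: list of (text, x0, y0, x1, y1) tuples from the text layer.
        Returns one link area per citation marker, keyed by reference number."""
        areas = []
        for text, x0, y0, x1, y1 in words:
            match = CITATION.fullmatch(text)
            if match:
                areas.append({"page": page_no,
                              "bbox": (x0, y0, x1, y1),
                              "ref": int(match.group(1))})
        return areas

    print(link_areas(1, [("[12]", 120, 540, 138, 552), ("paper", 20, 540, 55, 552)]))

A pen tap whose coordinates fall inside one of these bounding boxes would then, depending on the gesture and user settings, fetch the reference's metadata for audio output or open the cited document on screen.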

The Lost Cosmonaut Interactive Narrative

The Lost Cosmonaut is an interactive narrative based on digitally enhanced paper. This technology uses an electronic pen to mediate between paper and computer: any actions of the pen on the paper can be captured and processed by a computer, and digitally controlled events can in turn be mapped onto the paper. The story in this narrative environment reveals itself partially through written text and images on the paper surface, just as in any other printed story. However, additional information in the form of digitally controlled outputs such as sound, light and projections can be accessed through interaction with pen and paper. Furthermore, the audience is not only supposed to read and otherwise perceive information; we also want them to actively produce content for this environment by writing onto the paper, thereby adding content to the database containing the digital output. The result is a complex multimedia environment that works on three levels: on paper, in a digitally controlled visual and acoustic environment, and in the combination of both worlds. Last but not least, this environment is an open system which grows as a collaborative effort over time, as each user adds their own entries to paper and database. We argue that using paper as an integrated part of a digital environment is a best-of-both-worlds approach that opens up new possibilities for producing and perceiving narrative.

Global Information Systems Group, GlobIS, ETH Zurich
Moira C. Norrie, Beat Signer, Nadir Weibel, Rudi Belotti, Axel Vogelsang

More details about the presented system can be found in our paper entitled 'The Lost Cosmonaut: An Interactive Narrative Environment on Basis of Digitally Enhanced Paper', which was presented at the International Conference on Virtual Storytelling 2005 in Strasbourg, France: https://www.academia.edu/175448/The_Lost_Cosmonaut_An_Interactive_Narrative_Environment_on_Basis_of_Digitally_Enhanced_Paper

Video: https://www.youtube.com/watch?v=fu27Cmr1aX4
Norrie, Nadir Weibel, Curt Walter Tannhäuser <br /> <br />More details about the interactive paper framework that has been used in realising the Generosa Enterprise project can be found in our paper entitled &#39;General Framework for the Rapid Development of Interactive Paper Applications&#39; which has been presented at the 1st International Workshop on Collaborating over Paper and Digital Documents taking place in Banff, Canada: <a href="https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications" rel="nofollow">https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11583666"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11583666"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11583666; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11583666]").text(description); $(".js-view-count[data-work-id=11583666]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 11583666; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='11583666']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (false){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "-1" } } $('.js-work-strip[data-work-id=11583666]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":11583666,"title":"The Generosa Enterprise Project","translated_title":"","metadata":{"abstract":"An interactive art installation was realised as part of the 150 year jubilee of ETH Zurich and presented to a wide public for two and a half weeks in the Welten des Wissens exhibition at Platzspitz Zurich. The Generosa Enterprise installation provides information about Monte Generoso, a mountain located in the southern part of Switzerland. 
The visitors of the exhibition are taken along a journey where they can experience the world of Monte Generoso combining art, science and interactive paper technology.\r\n\r\nGlobal Information Systems Group, ETH Zurich\r\nBeat Signer, Moira C. Norrie, Nadir Weibel, Curt Walter Tannhäuser\r\n\r\nMore details about the interactive paper framework that has been used in realising the Generosa Enterprise project can be found in our paper entitled 'General Framework for the Rapid Development of Interactive Paper Applications' which has been presented at the 1st International Workshop on Collaborating over Paper and Digital Documents taking place in Banff, Canada: https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications"},"translated_abstract":"An interactive art installation was realised as part of the 150 year jubilee of ETH Zurich and presented to a wide public for two and a half weeks in the Welten des Wissens exhibition at Platzspitz Zurich. The Generosa Enterprise installation provides information about Monte Generoso, a mountain located in the southern part of Switzerland. The visitors of the exhibition are taken along a journey where they can experience the world of Monte Generoso combining art, science and interactive paper technology.\r\n\r\nGlobal Information Systems Group, ETH Zurich\r\nBeat Signer, Moira C. Norrie, Nadir Weibel, Curt Walter Tannhäuser\r\n\r\nMore details about the interactive paper framework that has been used in realising the Generosa Enterprise project can be found in our paper entitled 'General Framework for the Rapid Development of Interactive Paper Applications' which has been presented at the 1st International Workshop on Collaborating over Paper and Digital Documents taking place in Banff, Canada: https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications","internal_url":"https://www.academia.edu/11583666/The_Generosa_Enterprise_Project","translated_internal_url":"","created_at":"2015-03-22T06:02:44.430-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[],"slug":"The_Generosa_Enterprise_Project","translated_slug":"","page_count":null,"language":"en","content_type":"Work","summary":"An interactive art installation was realised as part of the 150 year jubilee of ETH Zurich and presented to a wide public for two and a half weeks in the Welten des Wissens exhibition at Platzspitz Zurich. The Generosa Enterprise installation provides information about Monte Generoso, a mountain located in the southern part of Switzerland. The visitors of the exhibition are taken along a journey where they can experience the world of Monte Generoso combining art, science and interactive paper technology.\r\n\r\nGlobal Information Systems Group, ETH Zurich\r\nBeat Signer, Moira C. 
Norrie, Nadir Weibel, Curt Walter Tannhäuser\r\n\r\nMore details about the interactive paper framework that has been used in realising the Generosa Enterprise project can be found in our paper entitled 'General Framework for the Rapid Development of Interactive Paper Applications' which has been presented at the 1st International Workshop on Collaborating over Paper and Digital Documents taking place in Banff, Canada: https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[],"research_interests":[{"id":37,"name":"Information Systems","url":"https://www.academia.edu/Documents/in/Information_Systems"},{"id":422,"name":"Computer Science","url":"https://www.academia.edu/Documents/in/Computer_Science"},{"id":472,"name":"Human Computer Interaction","url":"https://www.academia.edu/Documents/in/Human_Computer_Interaction"},{"id":859,"name":"Communication","url":"https://www.academia.edu/Documents/in/Communication"},{"id":922,"name":"Education","url":"https://www.academia.edu/Documents/in/Education"},{"id":923,"name":"Technology","url":"https://www.academia.edu/Documents/in/Technology"},{"id":933,"name":"New Media","url":"https://www.academia.edu/Documents/in/New_Media"},{"id":1210,"name":"Informatics","url":"https://www.academia.edu/Documents/in/Informatics"},{"id":1236,"name":"Art","url":"https://www.academia.edu/Documents/in/Art"},{"id":2869,"name":"Digital Media","url":"https://www.academia.edu/Documents/in/Digital_Media"},{"id":2879,"name":"Ubiquitous Computing","url":"https://www.academia.edu/Documents/in/Ubiquitous_Computing"},{"id":3419,"name":"Multimedia","url":"https://www.academia.edu/Documents/in/Multimedia"},{"id":10165,"name":"Interactive and Digital Media","url":"https://www.academia.edu/Documents/in/Interactive_and_Digital_Media"},{"id":11081,"name":"Multimodal Interaction","url":"https://www.academia.edu/Documents/in/Multimodal_Interaction"},{"id":11085,"name":"Cross-Media Information Spaces","url":"https://www.academia.edu/Documents/in/Cross-Media_Information_Spaces"},{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"},{"id":15951,"name":"Digital Pen and Paper","url":"https://www.academia.edu/Documents/in/Digital_Pen_and_Paper"},{"id":42095,"name":"Document Engineering","url":"https://www.academia.edu/Documents/in/Document_Engineering"},{"id":668047,"name":"Digital Media and Interaction Design","url":"https://www.academia.edu/Documents/in/Digital_Media_and_Interaction_Design"}],"urls":[{"id":4616411,"url":"https://www.youtube.com/watch?v=qE1-8xpG18s"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11583666-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11583039"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11583039/PaperProof_Paper_Digital_Proof_Editing"><img alt="Research paper thumbnail of PaperProof: Paper-Digital Proof-Editing" class="work-thumbnail" 
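As background for how interactive paper installations of this kind typically work: printed pages carry position information (for instance the Anoto dot pattern mentioned for the iTable below), and regions of a page are bound to digital resources that are triggered when the pen enters them. The following is a minimal sketch of that idea only; all class and method names are hypothetical illustrations, not the actual iPaper API.

```java
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: rectangular "active areas" on a printed page are
// bound to digital resources, and a pen position reported by the digital
// pen is resolved to the bound resource. Not the actual iPaper API.
public class ActiveAreaResolver {
    record ActiveArea(Rectangle2D shape, String resourceUri) {}

    private final List<ActiveArea> areas = new ArrayList<>();

    public void bind(Rectangle2D shape, String resourceUri) {
        areas.add(new ActiveArea(shape, resourceUri));
    }

    // Resolve a pen coordinate (in page space) to the linked resource,
    // or null if the pen is outside all active areas.
    public String resolve(double x, double y) {
        for (ActiveArea area : areas) {
            if (area.shape().contains(x, y)) {
                return area.resourceUri();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ActiveAreaResolver page = new ActiveAreaResolver();
        // Illustrative binding: a region of the page linked to a video.
        page.bind(new Rectangle2D.Double(20, 40, 160, 90),
                  "https://www.youtube.com/watch?v=qE1-8xpG18s");
        System.out.println(page.resolve(50, 60)); // inside the bound area
    }
}
```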
src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">PaperProof: Paper-Digital Proof-Editing</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">PaperProof is a paper-digital proof-editing application that allows users to edit digital documen...</span><a class="js-work-more-abstract" data-broccoli-component="work_strip.more_abstract" data-click-track="profile-work-strip-more-abstract" href="javascript:;"><span> more </span><span><i class="fa fa-caret-down"></i></span></a><span class="js-work-more-abstract-untruncated hidden">PaperProof is a paper-digital proof-editing application that allows users to edit digital documents based on gesture-based markup of printed versions. It interprets the pen strokes made by the users on paper and can automatically execute the intended editing operations in the digital source document. <br /> <br />PaperProof operations may be executed either in real-time to support users reviewing documents at their workplace or at a later time if the user is currently on the move and does not have ready access to a digital version of the document. This enables users to switch seamlessly back and forth between paper and digital instances of a document throughout the document lifecycle working with whichever medium is preferred for a given task. <br /> <br />Global Information Systems, GlobIS, ETH Zurich <br />Nadir Weibel, Adriana Ispas, Moira C. Norrie, Beat Signer <br /> <br />More details about the presented system can be found in our paper entitled &#39;PaperProof: A Paper-Digital Proof-Editing System&#39; which has been presented at the 26th International Conference on Human Factors in Computing Systems (Interactivity Track) taking place in Florence, Italy: <a href="https://www.academia.edu/175420/PaperProof_A_Paper-Digital_Proof-Editing_System" rel="nofollow">https://www.academia.edu/175420/PaperProof_A_Paper-Digital_Proof-Editing_System</a></span></div><div class="wp-workCard_item wp-workCard--actions"><span class="work-strip-bookmark-button-container"></span><span class="wp-workCard--action visible-if-viewed-by-owner inline-block" style="display: none;"><span class="js-profile-work-strip-edit-button-wrapper profile-work-strip-edit-button-wrapper" data-work-id="11583039"><a class="js-profile-work-strip-edit-button" tabindex="0"><span><i class="fa fa-pencil"></i></span><span>Edit</span></a></span></span></div><div class="wp-workCard_item wp-workCard--stats"><span><span><span class="js-view-count view-count u-mr2x" data-work-id="11583039"><i class="fa fa-spinner fa-spin"></i></span><script>$(function () { var workId = 11583039; window.Academia.workViewCountsFetcher.queue(workId, function (count) { var description = window.$h.commaizeInt(count) + " " + window.$h.pluralize(count, 'View'); $(".js-view-count[data-work-id=11583039]").text(description); $(".js-view-count[data-work-id=11583039]").attr('title', description).tooltip(); }); });</script></span></span><span><span class="percentile-widget hidden"><span class="u-mr2x work-percentile"></span></span><script>$(function () { var workId = 11583039; window.Academia.workPercentilesFetcher.queue(workId, function (percentileText) { var container = $(".js-work-strip[data-work-id='11583039']"); container.find('.work-percentile').text(percentileText.charAt(0).toUpperCase() + percentileText.slice(1)); container.find('.percentile-widget').show(); 
container.find('.percentile-widget').removeClass('hidden'); }); });</script></span></div><div id="work-strip-premium-row-container"></div></div></div><script> require.config({ waitSeconds: 90 })(["https://a.academia-assets.com/assets/wow_profile-a9bf3a2bc8c89fa2a77156577594264ee8a0f214d74241bc0fcd3f69f8d107ac.js","https://a.academia-assets.com/assets/work_edit-ad038b8c047c1a8d4fa01b402d530ff93c45fee2137a149a4a5398bc8ad67560.js"], function() { // from javascript_helper.rb var dispatcherData = {} if (false){ window.WowProfile.dispatcher = window.WowProfile.dispatcher || _.clone(Backbone.Events); dispatcherData = { dispatcher: window.WowProfile.dispatcher, downloadLinkId: "-1" } } $('.js-work-strip[data-work-id=11583039]').each(function() { if (!$(this).data('initialized')) { new WowProfile.WorkStripView({ el: this, workJSON: {"id":11583039,"title":"PaperProof: Paper-Digital Proof-Editing","translated_title":"","metadata":{"abstract":"PaperProof is a paper-digital proof-editing application that allows users to edit digital documents based on gesture-based markup of printed versions. It interprets the pen strokes made by the users on paper and can automatically execute the intended editing operations in the digital source document.\r\n\r\nPaperProof operations may be executed either in real-time to support users reviewing documents at their workplace or at a later time if the user is currently on the move and does not have ready access to a digital version of the document. This enables users to switch seamlessly back and forth between paper and digital instances of a document throughout the document lifecycle working with whichever medium is preferred for a given task.\r\n\r\nGlobal Information Systems, GlobIS, ETH Zurich\r\nNadir Weibel, Adriana Ispas, Moira C. Norrie, Beat Signer\r\n\r\nMore details about the presented system can be found in our paper entitled 'PaperProof: A Paper-Digital Proof-Editing System' which has been presented at the 26th International Conference on Human Factors in Computing Systems (Interactivity Track) taking place in Florence, Italy: https://www.academia.edu/175420/PaperProof_A_Paper-Digital_Proof-Editing_System"},"translated_abstract":"PaperProof is a paper-digital proof-editing application that allows users to edit digital documents based on gesture-based markup of printed versions. It interprets the pen strokes made by the users on paper and can automatically execute the intended editing operations in the digital source document.\r\n\r\nPaperProof operations may be executed either in real-time to support users reviewing documents at their workplace or at a later time if the user is currently on the move and does not have ready access to a digital version of the document. This enables users to switch seamlessly back and forth between paper and digital instances of a document throughout the document lifecycle working with whichever medium is preferred for a given task.\r\n\r\nGlobal Information Systems, GlobIS, ETH Zurich\r\nNadir Weibel, Adriana Ispas, Moira C. 
Norrie, Beat Signer\r\n\r\nMore details about the presented system can be found in our paper entitled 'PaperProof: A Paper-Digital Proof-Editing System' which has been presented at the 26th International Conference on Human Factors in Computing Systems (Interactivity Track) taking place in Florence, Italy: https://www.academia.edu/175420/PaperProof_A_Paper-Digital_Proof-Editing_System","internal_url":"https://www.academia.edu/11583039/PaperProof_Paper_Digital_Proof_Editing","translated_internal_url":"","created_at":"2015-03-22T05:10:26.157-07:00","preview_url":null,"current_user_can_edit":null,"current_user_is_owner":null,"owner_id":13155,"coauthors_can_edit":true,"document_type":"other","co_author_tags":[],"downloadable_attachments":[],"slug":"PaperProof_Paper_Digital_Proof_Editing","translated_slug":"","page_count":null,"language":"en","content_type":"Work","summary":"PaperProof is a paper-digital proof-editing application that allows users to edit digital documents based on gesture-based markup of printed versions. It interprets the pen strokes made by the users on paper and can automatically execute the intended editing operations in the digital source document.\r\n\r\nPaperProof operations may be executed either in real-time to support users reviewing documents at their workplace or at a later time if the user is currently on the move and does not have ready access to a digital version of the document. This enables users to switch seamlessly back and forth between paper and digital instances of a document throughout the document lifecycle working with whichever medium is preferred for a given task.\r\n\r\nGlobal Information Systems, GlobIS, ETH Zurich\r\nNadir Weibel, Adriana Ispas, Moira C. Norrie, Beat Signer\r\n\r\nMore details about the presented system can be found in our paper entitled 'PaperProof: A Paper-Digital Proof-Editing System' which has been presented at the 26th International Conference on Human Factors in Computing Systems (Interactivity Track) taking place in Florence, Italy: https://www.academia.edu/175420/PaperProof_A_Paper-Digital_Proof-Editing_System","impression_tracking_id":null,"owner":{"id":13155,"first_name":"Beat","middle_initials":null,"last_name":"Signer","page_name":"BeatSigner","domain_name":"vub","created_at":"2008-10-29T17:11:18.015-07:00","display_name":"Beat Signer","url":"https://vub.academia.edu/BeatSigner"},"attachments":[],"research_interests":[{"id":11086,"name":"Interactive Paper","url":"https://www.academia.edu/Documents/in/Interactive_Paper"}],"urls":[{"id":4616338,"url":"https://www.youtube.com/watch?v=8ZW2Msw6HM0"}]}, dispatcherData: dispatcherData }); $(this).data('initialized', true); } }); $a.trackClickSource(".js-work-strip-work-link", "profile_work_strip") if (false) { Aedu.setUpFigureCarousel('profile-work-11583039-figures'); } }); </script> <div class="js-work-strip profile--work_container" data-work-id="11582042"><div class="profile--work_thumbnail hidden-xs"><a class="js-work-strip-work-link" data-click-track="profile-work-strip-thumbnail" rel="nofollow" href="https://www.academia.edu/11582042/iTable_Interactive_Tabletop"><img alt="Research paper thumbnail of iTable Interactive Tabletop" class="work-thumbnail" src="https://a.academia-assets.com/images/blank-paper.jpg" /></a></div><div class="wp-workCard wp-workCard_itemContainer"><div class="wp-workCard_item wp-workCard--title">iTable Interactive Tabletop</div><div class="wp-workCard_item"><span class="js-work-more-abstract-truncated">Based on the iPaper (Interactive Paper) 
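To make the gesture-to-edit mapping concrete, here is a minimal sketch of the idea described above: recognised pen gestures are translated into editing operations that can be applied to the digital source text either immediately or replayed later in batch. The gesture types, offsets and class names are hypothetical; this is not the actual PaperProof implementation.

```java
import java.util.List;

// Hypothetical sketch of replaying recognised proof-editing gestures on the
// digital source document. Offsets refer to the document state at the time
// each gesture is applied.
public class ProofEditReplayer {
    enum GestureType { STRIKETHROUGH, INSERT_CARET, UNDERLINE }

    record Gesture(GestureType type, int offset, int length, String payload) {}

    // Apply one recognised gesture to the document text.
    static String apply(String doc, Gesture g) {
        return switch (g.type()) {
            case STRIKETHROUGH -> doc.substring(0, g.offset())
                                   + doc.substring(g.offset() + g.length());
            case INSERT_CARET  -> doc.substring(0, g.offset())
                                   + g.payload()
                                   + doc.substring(g.offset());
            case UNDERLINE     -> doc; // formatting only; no text change
        };
    }

    // Batch replay: gestures captured while on the move are applied in
    // order once a digital copy of the document is available again.
    static String replay(String doc, List<Gesture> gestures) {
        for (Gesture g : gestures) {
            doc = apply(doc, g);
        }
        return doc;
    }

    public static void main(String[] args) {
        String doc = "The quick browne fox";
        List<Gesture> edits = List.of(
            new Gesture(GestureType.STRIKETHROUGH, 10, 7, null),
            new Gesture(GestureType.INSERT_CARET, 10, 0, "brown "));
        System.out.println(replay(doc, edits)); // "The quick brown fox"
    }
}
```

The real-time mode described in the abstract would simply call the same apply step for each gesture as it is recognised, instead of queueing gestures for a later replay.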
iTable Interactive Tabletop

Based on the iPaper (Interactive Paper) framework, we have developed an interactive tabletop. The surface of a table has been augmented with the Anoto dot pattern, and a desk-mounted projector is used for the tabletop projection. This video shows the pen-based control of three different applications: the Google Earth browser, a photo album and a drawing tool. Virtually any third-party application can be controlled using the presented iTable solution.

Global Information Systems, GlobIS, ETH Zurich
Beat Signer, Matthias Geel, Nadir Weibel, Moira C. Norrie

More details about the interactive paper framework used in realising the iTable can be found in our paper 'General Framework for the Rapid Development of Interactive Paper Applications', presented at the 1st International Workshop on Collaborating over Paper and Digital Documents in Banff, Canada: https://www.academia.edu/175444/General_Framework_for_the_Rapid_Development_of_Interactive_Paper_Applications

Topics: Information Systems, Computer Science, Software Engineering, Information Science, Human Computer Interaction, Information Technology, Communication, Technology, New Media, Educational Technology, Digital Libraries, Informatics, Knowledge Management, Information Management, Computer Supported Cooperative Work (CSCW), Digital Media, Ubiquitous Computing, Multimedia, Interaction Design, Augmented Reality, Interactive and Digital Media, Multimodal Interaction, Cross-Media Information Spaces, Interactive Paper, Personal Information Management, Digital Pen and Paper, Gesture Recognition, Tangible User Interfaces, Multimodal Interfaces, Document Engineering, Hypermedia, Multi-Touch, Tabletop, Digital Media and Interaction Design
Video: https://www.youtube.com/watch?v=rc7I5h6XirY
