CINXE.COM
OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning
<!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="content-type"/> <title>OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning</title> <!--Generated on Mon Oct 7 06:31:24 2024 by LaTeXML (version 0.8.8) http://dlmf.nist.gov/LaTeXML/.--> <meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport"/> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv-fonts.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/latexml_styles.css" rel="stylesheet" type="text/css"/> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.3/html2canvas.min.js"></script> <script src="/static/browse/0.3.4/js/addons_new.js"></script> <script src="/static/browse/0.3.4/js/feedbackOverlay.js"></script> <meta content="Earables, wearables, health monitoring, on-device machine learning, embedded artificial intelligence, privacy-preserving computing." lang="en" name="keywords"/> <base href="/html/2410.04775v1/"/></head> <body> <nav class="ltx_page_navbar"> <nav class="ltx_TOC"> <ol class="ltx_toclist"> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S1" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1 </span>Introduction</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S2" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2 </span>Design Principles</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S2.SS1" title="In 2. Design Principles ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1 </span>Local Computation and Energy Efficiency</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S2.SS2" title="In 2. Design Principles ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.2 </span>Dynamic Sensor Access and Multitasking</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S2.SS3" title="In 2. Design Principles ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.3 </span>Unified Communication Across Devices</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S2.SS4" title="In 2. Design Principles ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.4 </span>Privacy by Design</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3 </span>Hardware Architecture</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS1" title="In 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.1 </span>Form Factor and Sensor Placement</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS2" title="In 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2 </span>Compute Subsystem</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS2.SSS1" title="In 3.2. Compute Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2.1 </span>Main Processor and CNN Accelerator</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS2.SSS2" title="In 3.2. Compute Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2.2 </span>Specialised Computational Units</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS3" title="In 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3 </span>Wireless Subsystem</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS4" title="In 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4 </span>Sensing Subsystem</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS4.SSS1" title="In 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4.1 </span>Photoplethysmography sensor (PPG)</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS4.SSS2" title="In 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4.2 </span>Inertial Measurement Unit</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS4.SSS3" title="In 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4.3 </span>Temperature</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS4.SSS4" title="In 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.4.4 </span>Microphones</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4 </span>Software Architecture</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS1" title="In 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.1 </span>Communication Subsystem</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS1.SSS1" title="In 4.1. Communication Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.1.1 </span>Inter-Module Communication (IMC)</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS1.SSS2" title="In 4.1. Communication Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.1.2 </span>Communication Interface</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2" title="In 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2 </span>Sense and Compute Subsystem</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2.SSS1" title="In 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.1 </span>Sensor Distribution Module</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2.SSS2" title="In 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.2 </span>Machine Learning Engine</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2.SSS3" title="In 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.3 </span>Audio Manager</span></a> <ol class="ltx_toclist ltx_toclist_subsubsection"> <li class="ltx_tocentry ltx_tocentry_paragraph"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2.SSS3.Px1" title="In 4.2.3. Audio Manager ‣ 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title">Time-Critical Pipelines</span></a></li> <li class="ltx_tocentry ltx_tocentry_paragraph"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS2.SSS3.Px2" title="In 4.2.3. Audio Manager ‣ 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title">Non-Time-Critical Pipelines</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS3" title="In 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3 </span>Load Balancing Subsystem</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S5" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5 </span>Smartphone Application</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6 </span>Applications of OmniBuds</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS1" title="In 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1 </span>Existing Applications</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS1.SSS1" title="In 6.1. Existing Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.1 </span>Vital Signs Monitoring</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS1.SSS2" title="In 6.1. Existing Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.2 </span>Multi-modal Contextual Recognition</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2" title="In 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2 </span>Future Applications</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2.SSS1" title="In 6.2. Future Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.1 </span>Emotion Recognition and Augmented Feedback</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2.SSS2" title="In 6.2. Future Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.2 </span>Gesture-Based Interface for Cognitive Augmentation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2.SSS3" title="In 6.2. Future Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.3 </span>Personalised Fitness Coaching with Motion Analysis</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2.SSS4" title="In 6.2. Future Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.4 </span>Adaptive Personal Assistant Based on Physical and Cognitive State</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S6.SS2.SSS5" title="In 6.2. Future Applications ‣ 6. Applications of OmniBuds ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.5 </span>Acoustic Augmented Reality for Situational Awareness</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S7" title="In OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7 </span>Conclusion</span></a></li> </ol></nav> </nav> <div class="ltx_page_main"> <div class="ltx_page_content"> <article class="ltx_document ltx_authors_1line ltx_leqno"> <h1 class="ltx_title ltx_title_document">OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning</h1> <div class="ltx_authors"> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname"><span class="ltx_text" id="id1.1.id1" style="font-size:120%;">Alessandro Montanari Ashok Thangarajan Khaldoon Al-Naimi Andrea Ferlini Yang Liu Ananta Narayanan Balaji Fahim Kawsar</span> </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id2.2.id1" style="font-size:90%;">Nokia Bell Labs, Cambridge (UK) </span><span class="ltx_text" id="id3.3.id2" style="font-size:90%;"> <span class="ltx_text ltx_affiliation_country" id="id3.3.id2.1"></span> </span> </span></span></span> </div> <div class="ltx_abstract"> <h6 class="ltx_title ltx_title_abstract">Abstract.</h6> <p class="ltx_p" id="id4.id1">Sensory earables have evolved from basic audio enhancement devices into sophisticated platforms for clinical-grade health monitoring and wellbeing management. This paper introduces OmniBuds, an advanced sensory earable platform integrating multiple biosensors and onboard computation powered by a machine learning accelerator, all within a real-time operating system (RTOS). The platform’s dual-ear symmetric design, equipped with precisely positioned kinetic, acoustic, optical, and thermal sensors, enables highly accurate and real-time physiological assessments. Unlike conventional earables that rely on external data processing, OmniBuds leverage real-time onboard computation to significantly enhance system efficiency, reduce latency, and safeguard privacy by processing data locally. This capability includes executing complex machine learning models directly on the device. We provide a comprehensive analysis of OmniBuds’ design, hardware and software architecture demonstrating its capacity for multi-functional applications, accurate and robust tracking of physiological parameters, and advanced human-computer interaction.</p> </div> <div class="ltx_keywords">Earables, wearables, health monitoring, on-device machine learning, embedded artificial intelligence, privacy-preserving computing. </div> <section class="ltx_section" id="S1"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">1. </span>Introduction</h2> <div class="ltx_para" id="S1.p1"> <p class="ltx_p" id="S1.p1.1">Sensory earables have transcended their initial promise of enhanced audio experiences to become powerful tools for accurate health monitoring and personal wellbeing management. Central to this transformation is their ability to leverage the ear’s unique anatomical proximity to critical vascular and acoustic structures, enabling precise physiological sensing while minimising the effects of motion artefacts <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Röddiger et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib34" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>; <span class="ltx_text" style="font-size:90%;">Choudhury</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib12" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite>.</p> </div> <div class="ltx_para" id="S1.p2"> <p class="ltx_p" id="S1.p2.1">The integration of advanced biosensors capable of tracking accurate vital markers allows earables to offer a comprehensive assessment of an individual’s physical, physiological, and social context. Recent academic studies have revealed the extensive potential of these devices in a variety of applications. Earables have demonstrated significant promise in the precise tracking of fitness metrics <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Prakash et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib33" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>; <span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib17" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib15" title=""><span class="ltx_text" style="font-size:90%;">2021a</span></a>; <span class="ltx_text" style="font-size:90%;">Atallah et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib5" title=""><span class="ltx_text" style="font-size:90%;">2014</span></a>; <span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib16" title=""><span class="ltx_text" style="font-size:90%;">2021b</span></a>)</cite>, monitoring cardiovascular and respiratory parameters <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib18" title=""><span class="ltx_text" style="font-size:90%;">2021c</span></a>; <span class="ltx_text" style="font-size:90%;">Romero et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib35" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>; <span class="ltx_text" style="font-size:90%;">Butkow et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib8" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite>, hearing screening <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Shahid et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib36" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>; <span class="ltx_text" style="font-size:90%;">Demirel et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib13" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>; <span class="ltx_text" style="font-size:90%;">Chan et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib10" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite> and modelling motor symptoms associated with neurological disorders such as Parkinson’s disease <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Goverdovsky et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib22" title=""><span class="ltx_text" style="font-size:90%;">2017</span></a>; <span class="ltx_text" style="font-size:90%;">Bleichner and Debener</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib7" title=""><span class="ltx_text" style="font-size:90%;">2017</span></a>; <span class="ltx_text" style="font-size:90%;">Kidmose et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib26" title=""><span class="ltx_text" style="font-size:90%;">2013</span></a>; <span class="ltx_text" style="font-size:90%;">Kalanadhabhatta et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib23" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite>. They have also been utilised to support dementia patients through cognitive assistance <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Franklin et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib19" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite> and to enhance auditory function via augmented hearing <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Veluri et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib39" title=""><span class="ltx_text" style="font-size:90%;">2024a</span></a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib40" title="">b</a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib38" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>; <span class="ltx_text" style="font-size:90%;">Yang et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib43" title=""><span class="ltx_text" style="font-size:90%;">2020</span></a>; <span class="ltx_text" style="font-size:90%;">Yang and Choudhury</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib42" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>; <span class="ltx_text" style="font-size:90%;">Demirel et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib14" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>)</cite>. Additionally, earables are pioneering the next frontier of personal computing devices, advancing human-computer interaction (HCI) by enabling seamless and intuitive engagement with devices worn on or around the body <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Röddiger et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib34" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>. This growing body of research underscores the transformative role of earables, particularly in healthcare and HCI, as they pave the way for innovative developments in personal computing.</p> </div> <div class="ltx_para" id="S1.p3"> <p class="ltx_p" id="S1.p3.1">Yet, despite remarkable strides, the ultimate challenge remains: creating an earable platform that seamlessly combines high performance sensing, functional utility, and design efficiency within the compact form factor demanded by the ear. It is within this delicate balance that the next breakthrough in sensory earables lies.</p> </div> <div class="ltx_para" id="S1.p4"> <p class="ltx_p" id="S1.p4.1">In 2018, we introduced eSense <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Kawsar et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib24" title=""><span class="ltx_text" style="font-size:90%;">2018a</span></a>)</cite>, a multisensory in-ear platform aimed at advancing research into intelligent earables. By integrating motion and audio sensors with Bluetooth Low Energy (BLE) connectivity, eSense facilitated the monitoring of activities such as speech and movement, while offering APIs to enable developers to access sensor data. This platform provided a foundation for exploring innovative capabilities; however, it also highlighted key challenges, including the need for enhanced sensor accuracy, system integration, design optimisation, user experience, and societal acceptance. These findings revealed both the limitations of eSense and its significant potential for further development and wider application in earable technology.</p> </div> <figure class="ltx_figure" id="S1.F1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="631" id="S1.F1.g1" src="x1.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 1. </span>OmniBuds main HW components and vital signs monitored.</figcaption> </figure> <div class="ltx_para" id="S1.p5"> <p class="ltx_p" id="S1.p5.1">Our reflection suggests that the true utility of sensory earables hinges on their ability to deliver precise and reliable sensing, a task that demands careful sensor placement to fully exploit the anatomical advantages of the ear. The inherent symmetry of the ears offers a unique opportunity, where spatial redundancy can significantly enhance the accuracy of physiological observations. Expanding beyond kinetic and acoustic sensors to incorporate advanced modalities such as optical PPG and temperature sensors introduces an array of new possibilities for health and wellbeing monitoring. In dual-ear configurations, this spatial redundancy unlocks multi-dimensional, robust health assessments, vastly increasing the scope of applications for these devices.</p> </div> <div class="ltx_para" id="S1.p6"> <p class="ltx_p" id="S1.p6.1">Currently, platforms like eSense operate as passive data collectors, with no onboard computation, relying entirely on external systems for processing. This design introduces inefficiencies, including communication bottlenecks, increased latency, and critical privacy risks due to the need for constant data transmission. We envision integrating local computational capabilities and sufficient storage within ergonomically designed earables. This shift would enable real-time processing, facilitate the execution of machine learning models directly on the device, and drastically enhance system efficiency and responsiveness. Privacy concerns would also be mitigated, as sensitive data could remain on the device without being transmitted externally. Moreover, the inclusion of powerful and programmable digital signal processing (DSP) can further amplify their impact, enabling augmented auditory experiences, spatial hearing enhancements, and even therapeutic interventions.</p> </div> <div class="ltx_para" id="S1.p7"> <p class="ltx_p" id="S1.p7.1">We anticipate that, with these advancements, earables can evolve from simple firmware-driven devices into fully developed programmable software platforms. This transformation would allow multiple applications to coexist, efficiently multitask, and optimally access sensors, creating a rich, integrated user experience untethered from secondary devices. However, advanced sensing, computing, and intervention capabilities must not compromise the primary user experience—the core functionality of earables as wireless audio devices. Therefore, a unified and efficient communication framework is essential to seamlessly support both wireless audio and these additional functionalities, ensuring a cohesive and high-quality user experience without interference.</p> </div> <div class="ltx_para" id="S1.p8"> <p class="ltx_p" id="S1.p8.1">These advancements represent a significant shift in the development of earables, positioning them at the forefront of personal health technology and human-computer interaction. In this paper, we introduce OmniBuds, a state-of-the-art sensory earable platform that exemplifies this shift. With an advanced array of carefully placed sensors, onboard computation powered by a machine learning accelerator, local storage, and programmable digital signal processors (DSPs), OmniBuds is designed to operate within a software stack built on a real-time operating system (RTOS). Its functionally and spatially symmetric dual-ear design opens new opportunities in accurate bio-sensing, targeted interventions, and multi-application functionality, all while ensuring ultra-efficient and privacy-preserving operation in a compact and ergonomic form factor.</p> </div> <div class="ltx_para" id="S1.p9"> <p class="ltx_p" id="S1.p9.1">The paper first outlines the design principles behind OmniBuds, followed by a detailed explanation of its hardware and software architecture. Each key subsystem is examined demonstrating the platform’s capabilities. Finally, we present applications of OmniBuds and explore their vast potential for future innovation, emphasising their transformative role in the advancement of earable technology.</p> </div> </section> <section class="ltx_section" id="S2"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">2. </span>Design Principles</h2> <div class="ltx_para" id="S2.p1"> <p class="ltx_p" id="S2.p1.1">The design of OmniBuds was driven by a central philosophy: integrating hardware and software in a way that maximises performance, efficiency, and versatility for earable computing. Unlike many other devices, where hardware and software development may occur in silos, OmniBuds was conceived as a cohesive platform where the two elements are deeply intertwined.</p> </div> <div class="ltx_para" id="S2.p2"> <p class="ltx_p" id="S2.p2.1">At the core of this design is the recognition that hardware choices dictate software capabilities, and vice versa. This interdependence enabled us to develop a platform that meets the current technical requirements of researchers. Additionally, it is designed to anticipate future needs in the field, particularly in terms of scalability, flexibility, and real-time processing.</p> </div> <div class="ltx_para" id="S2.p3"> <p class="ltx_p" id="S2.p3.1">In the following sections, we will explore the key design principles that guided the design of OmniBuds, highlighting how these principles shaped both the device’s hardware and software.</p> </div> <section class="ltx_subsection" id="S2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.1. </span>Local Computation and Energy Efficiency</h3> <div class="ltx_para" id="S2.SS1.p1"> <p class="ltx_p" id="S2.SS1.p1.1">A fundamental principle guiding the design of OmniBuds is the prioritisation of local computation. Unlike conventional earbuds, which often rely on external devices to handle intensive processing tasks, OmniBuds are designed to manage complex workloads directly on the device. This approach significantly reduces the need for constant communication with external systems, a major source of power consumption in wearable devices.</p> </div> <div class="ltx_para" id="S2.SS1.p2"> <p class="ltx_p" id="S2.SS1.p2.1">For instance, integrating a CNN accelerator enables OmniBuds to run sophisticated machine learning models on-device, such as those used for speech recognition. The power of the CNN accelerator lies in its ability to execute these complex models in milliseconds, without needing to stream audio data to an external device for processing. This enhances the responsiveness of the earbuds while also significantly reducing battery drain by keeping all computations local. The ability to process speech directly within the device enables applications where real-time interaction and low-latency responses are critical, such as hands-free control of devices or real-time assistance in challenging environments<cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Veluri et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib39" title=""><span class="ltx_text" style="font-size:90%;">2024a</span></a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib40" title="">b</a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib38" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite>.</p> </div> <div class="ltx_para" id="S2.SS1.p3"> <p class="ltx_p" id="S2.SS1.p3.1">Following this principle enables OmniBuds to support a wide range of applications, from real-time biometric monitoring to interactive audio experiences, all while ensuring extended battery life and data privacy.</p> </div> </section> <section class="ltx_subsection" id="S2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.2. </span>Dynamic Sensor Access and Multitasking</h3> <div class="ltx_para" id="S2.SS2.p1"> <p class="ltx_p" id="S2.SS2.p1.1">One of the central design principles of OmniBuds is the integration of sensor flexibility and multitasking <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Min et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib30" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib29" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite>, which is achieved by tightly coupling hardware and software design. The platform leverages intelligent sensors capable of performing many functions autonomously, reducing the need to constantly wake up the main microcontroller (MCU). These sensors are equipped with built-in hardware functionalities, such as basic data processing and event detection, allowing them to handle critical tasks independently.</p> </div> <div class="ltx_para" id="S2.SS2.p2"> <p class="ltx_p" id="S2.SS2.p2.1">On the software side, OmniBuds supports a multitasking environment that allows multiple applications to access sensor data simultaneously. Rather than restricting sensor access to a single application at a time, the system is designed to allow multiple applications to share sensor data without conflicts or excessive resource consumption. This requires a coordinated approach at the system level, where the OmniBuds software system manages access and ensures that each application can retrieve the data it needs without impacting the performance of other applications. For instance, physiological monitoring applications can access PPG sensor data while another application may simultaneously use the same data stream for user authentication <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Yadav et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib41" title=""><span class="ltx_text" style="font-size:90%;">2018</span></a>)</cite> or facial expression detection <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Choi et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib11" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>. The system efficiently manages this data access, ensuring that sensor data is shared across applications without compromising performance or power efficiency.</p> </div> <div class="ltx_para" id="S2.SS2.p3"> <p class="ltx_p" id="S2.SS2.p3.1">This holistic design—integrating flexible, autonomous sensors with multitasking software—ensures that OmniBuds remain adaptable to a wide range of experimental or real-world applications. Researchers and developers can take full advantage of the platform’s ability to run multiple, independent applications simultaneously, maximising the potential of each sensor and minimising unnecessary power consumption.</p> </div> </section> <section class="ltx_subsection" id="S2.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.3. </span>Unified Communication Across Devices</h3> <div class="ltx_para" id="S2.SS3.p1"> <p class="ltx_p" id="S2.SS3.p1.1">A key design principle of OmniBuds is to enable seamless communication between devices and software modules, ensuring that the platform remains both scalable and developer-friendly. This capability is underpinned by the integration of a Bluetooth Classic and Bluetooth Low Energy (BLE) chip at the hardware level and a unified set of APIs at the software level. This integration allows OmniBuds to abstract away the complexities of device-to-device communication, enabling modules to interact seamlessly, whether running locally on the earbuds or across multiple devices in the ecosystem.</p> </div> <div class="ltx_para" id="S2.SS3.p2"> <p class="ltx_p" id="S2.SS3.p2.1">The unified communication framework simplifies multi-device coordination while providing developers with a consistent interface for building robust and scalable applications. This framework allows applications to communicate across earbuds, or with external systems, without requiring developers to manage the intricacies of local versus remote execution. For example, real-time data from one earbud can be transmitted to the other or to an external device, such as a smartphone, while maintaining a high level of performance and low latency.</p> </div> <div class="ltx_para" id="S2.SS3.p3"> <p class="ltx_p" id="S2.SS3.p3.1">OmniBuds’ communication capabilities are particularly suited for research scenarios where multiple devices may need to work in concert. Whether managing sensor data, synchronising tasks, or coordinating between devices, the unified BLE communication framework ensures that developers can focus on building applications rather than dealing with the complexities of communication protocols.</p> </div> </section> <section class="ltx_subsection" id="S2.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.4. </span>Privacy by Design</h3> <div class="ltx_para" id="S2.SS4.p1"> <p class="ltx_p" id="S2.SS4.p1.1">In modern wearable systems, especially those handling sensitive physiological data, privacy is a non-negotiable requirement. OmniBuds address this through a dual-layer approach, combining local data processing at the hardware level with secure communication channels in the software.</p> </div> <div class="ltx_para" id="S2.SS4.p2"> <p class="ltx_p" id="S2.SS4.p2.1">By prioritising local computation, OmniBuds minimise the need to transmit sensitive data externally, reducing the risk of unauthorised access. When external communication is necessary, the platform uses encrypted Bluetooth Low Energy (BLE) to protect data integrity. This approach ensures that OmniBuds meet the stringent privacy requirements of health-focused research, safeguarding user data throughout all operations.</p> </div> <div class="ltx_para" id="S2.SS4.p3"> <p class="ltx_p" id="S2.SS4.p3.1">Demonstrating the synergy between hardware and software, OmniBuds set a new standard in earable technology. Designed to address current research demands and anticipate future needs, they combine seamless hardware-software integration with a strong focus on privacy and flexibility, positioning them as a leader in earable computing research.</p> </div> </section> </section> <section class="ltx_section" id="S3"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">3. </span>Hardware Architecture</h2> <figure class="ltx_figure" id="S3.F2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="361" id="S3.F2.g1" src="x2.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 2. </span>OmniBuds Hardware Block Diagram.</figcaption> </figure> <figure class="ltx_figure" id="S3.F3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="373" id="S3.F3.g1" src="x3.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 3. </span>OmniBuds PCBs.</figcaption> </figure> <div class="ltx_para" id="S3.p1"> <p class="ltx_p" id="S3.p1.1">OmniBuds are fully functional True Wireless Stereo (TWS) earbuds with standard features like music playback, calls, Active Noise Cancellation (ANC), and Acoustic Transparency (or Pass-through). What distinguishes them is the integration of additional sensors and computational units, transforming them into a powerful platform for earable computing research. Both earbuds share identical hardware, enabling multi-device computation and sensing. This symmetry allows computational and sensing tasks to be distributed between the two earbuds, such as one performing signals pre-processing while the other handles intensive machine learning tasks, or both working together for enhanced spatial sensing.</p> </div> <div class="ltx_para" id="S3.p2"> <p class="ltx_p" id="S3.p2.1">At the core of OmniBuds hardware (Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F2" title="Figure 2 ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">2</span></a>) is a dual-core microcontroller (MCU) with a CNN accelerator for efficient on-device machine learning. A dedicated audio DSP handles audio processing, and the BioHub processor manages PPG data for vital signs estimation, all designed to emphasise local computation for real-time data processing and machine learning.</p> </div> <div class="ltx_para" id="S3.p3"> <p class="ltx_p" id="S3.p3.1">OmniBuds are equipped with a comprehensive suite of sensors selected for their low-power characteristics and advanced functionalities: a 9-axis IMU, a 3-wavelength PPG sensor, a medical-grade temperature sensor, and three microphones. Each earbud includes 1GB of non-volatile memory and 8MB of RAM, offering ample capacity for complex applications, such as large datasets and sophisticated machine learning models. For interaction, the earbuds feature a button and an RGB LED. Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F3" title="Figure 3 ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">3</span></a> shows the OmniBuds PCBs, consisting of a mix of rigid, flex-rigid, and flex designs.</p> </div> <div class="ltx_para" id="S3.p4"> <p class="ltx_p" id="S3.p4.1">The earbuds are powered by a 105mAh battery, providing up to 6 hours of music playback and approximately 8 hours of PPG sensing. The charging case, featuring a 630mAh battery, allows for multiple recharges of the earbuds, extending their total usage time.</p> </div> <div class="ltx_para" id="S3.p5"> <p class="ltx_p" id="S3.p5.1">In the following sections, we first analyse the OmniBuds form factor and the positioning of the sensors before delving into the main HW submodules: computation, communication and sensing.</p> </div> <section class="ltx_subsection" id="S3.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.1. </span>Form Factor and Sensor Placement</h3> <figure class="ltx_figure" id="S3.F4"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="437" id="S3.F4.g1" src="x4.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 4. </span>Placement of the sensors in OmniBuds.</figcaption> </figure> <div class="ltx_para" id="S3.SS1.p1"> <p class="ltx_p" id="S3.SS1.p1.1">Form factor and ergonomics design are crucial in wearables, impacting usability, performance, and sensor data quality. When designing OmniBuds, we considered factors affecting comfort, sensor accuracy, and manufacturing ease. Enclosed in an injection-moulded plastic cover, OmniBuds measure 4.3 x 3 x 2.2 cm and weigh around 12g, roughly twice as much as other commercial true wireless earbuds (e.g., Apple AirPods Pro 2: 5.3 grams <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">app</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib2" title=""><span class="ltx_text" style="font-size:90%;">[n. d.]</span></a>)</cite>, Bose Quite Comfort Earbuds II: 6.24 grams <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">bos</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib3" title=""><span class="ltx_text" style="font-size:90%;">[n. d.]</span></a>)</cite>) Despite this, OmniBuds remain comfortable even when worn for extended periods. OmniBuds feature a more capable and intricate sensor suite than comparable earbuds, requiring a larger battery. Specifically, the battery accounts for about 20% of the earbud weight (approximately 2.4 grams).</p> </div> <div class="ltx_para" id="S3.SS1.p2"> <p class="ltx_p" id="S3.SS1.p2.1">Sensor placement is a critical element of the OmniBuds design, where performance, user comfort, and manufacturing feasibility are carefully balanced. Each positioning decision is driven by extensive validation studies and the need to optimise the quality of data collected without compromising wearability or ease of production. Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F4" title="Figure 4 ‣ 3.1. Form Factor and Sensor Placement ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">4</span></a> illustrates the placement of the various sensors within the OmniBuds.</p> </div> <div class="ltx_para" id="S3.SS1.p3"> <p class="ltx_p" id="S3.SS1.p3.1">The arrangement of the three microphones is particularly strategic. Two outward-facing microphones, positioned at the top and bottom of each earbud, facilitate essential acoustic features such as ANC, Transparency mode, and beamforming for voice capture. The positioning allows the microphones to target sounds from the user’s environment while enabling effective noise cancellation by capturing ambient noise from multiple angles. Meanwhile, the inward-facing microphone, located within the speaker chamber, serves multiple purposes. While its primary role is as a reference microphone to enhance ANC performance, it also acts as a sensitive detector for internal body sounds, as explored in many studies <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib15" title=""><span class="ltx_text" style="font-size:90%;">2021a</span></a>; <span class="ltx_text" style="font-size:90%;">Ma et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib28" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>; <span class="ltx_text" style="font-size:90%;">Truong et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib37" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>. To maximize the accuracy of in-ear recordings, the earbuds are designed to ensure a snug fit in the ear canal, exploiting the occlusion effect for higher-quality body sound detection. The combination of interchangeable ear-tips and an ergonomic earbud shape helps achieve this fit, ensuring both acoustic performance and user comfort.</p> </div> <div class="ltx_para" id="S3.SS1.p4"> <p class="ltx_p" id="S3.SS1.p4.1">The placement of the 9-axis Inertial Measurement Unit (IMU) is equally deliberate. Positioned centrally on the outer plastic cover of the earbuds, the IMU is optimally located to capture a broad range of movements, from subtle facial gestures to larger head and body movements supporting diverse applications that rely on motion data.</p> </div> <div class="ltx_para" id="S3.SS1.p5"> <p class="ltx_p" id="S3.SS1.p5.1">The photoplethysmography sensor (PPG) is crucial in OmniBuds since it supports the measurement of all vital signs (except temperature) directly or indirectly. The accuracy of PPG-derived metrics depends on several factors, including sensor location. In designing OmniBuds, we conducted a systematic validation study to determine the optimal placement for a PPG sensor in an earable <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib18" title=""><span class="ltx_text" style="font-size:90%;">2021c</span></a>)</cite>. We explored three placements: concha, ear canal, and behind the auricle (or pinna). Our study concluded that the best location for the PPG sensor in OmniBuds is facing towards the concha. Although in-ear canal placement provides slightly more accurate data, the concha placement preserves a higher level of user comfort and practicality for everyday use, which are key factors for long-term wearability in earables.</p> </div> <div class="ltx_para" id="S3.SS1.p6"> <p class="ltx_p" id="S3.SS1.p6.1">Finally, the location of the temperature sensor is determined by the need to measure body core temperature with high accuracy, while accounting for usability and manufacturing constraints. While placing the sensor in the ear canal would provide the most accurate readings <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib18" title=""><span class="ltx_text" style="font-size:90%;">2021c</span></a>)</cite>, it is instead placed against the skin of the concha, which still offers a reliable proxy for body core temperature. This location strikes the ideal balance between accuracy and manufacturing feasibility, allowing for effective temperature sensing without requiring overly complex computational corrections.</p> </div> <div class="ltx_para" id="S3.SS1.p7"> <p class="ltx_p" id="S3.SS1.p7.1">In summary, the positioning of sensors in OmniBuds reflects a careful evaluation of trade-offs, balancing scientific rigour with practical considerations to ensure the best possible user experience and data quality.</p> </div> </section> <section class="ltx_subsection" id="S3.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.2. </span>Compute Subsystem</h3> <section class="ltx_subsubsection" id="S3.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.2.1. </span>Main Processor and CNN Accelerator</h4> <div class="ltx_para" id="S3.SS2.SSS1.p1"> <p class="ltx_p" id="S3.SS2.SSS1.p1.1">At the core of the OmniBuds’ computation system is a low-power dual-core MCU (Cortex-M4F and RISC-V) with a floating point unit, providing robust processing power. The MCU features 512KB of Flash memory and 128KB of SRAM, making it well-suited for managing real-time tasks and coordinating the various sensors and subsystems.</p> </div> <div class="ltx_para" id="S3.SS2.SSS1.p2"> <p class="ltx_p" id="S3.SS2.SSS1.p2.1">To further enhance its computational capabilities, OmniBuds integrates a dedicated hardware-based Convolutional Neural Network (CNN) accelerator. The accelerator consists of 64 convolutional processors, each equipped with its own pooling engine, input cache, weight memory (from 8-bit down to 1-bit width), and convolution engine, enabling highly parallelised execution of machine learning models. The CNN accelerator supports up to 64 layers and can handle a maximum input size of 1024x1024, allowing it to execute a wide range of neural network architectures, including feed-forward models, residual networks, recurrent models and encoder-decoder designs.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.2.2. </span>Specialised Computational Units</h4> <div class="ltx_para" id="S3.SS2.SSS2.p1"> <p class="ltx_p" id="S3.SS2.SSS2.p1.1">In addition to the main processor and CNN accelerator, OmniBuds include specialised components designed for task-specific workloads, enhancing the platform’s ability to efficiently handle diverse data types while maintaining power efficiency.</p> </div> <div class="ltx_para" id="S3.SS2.SSS2.p2"> <p class="ltx_p" id="S3.SS2.SSS2.p2.1">One such component is the <span class="ltx_text ltx_font_italic" id="S3.SS2.SSS2.p2.1.1">Audio DSP</span>, which manages audio pipelines and processing tasks. It consists of two sub-cores: the Fast-DSP core for low-latency applications like ANC, and the Slow-DSP core for less time-critical tasks. The DSP allows efficient audio processing without burdening the main MCU, and its flexibility enables it to support a variety of tasks.</p> </div> <div class="ltx_para" id="S3.SS2.SSS2.p3"> <p class="ltx_p" id="S3.SS2.SSS2.p3.1">Another key component is the <span class="ltx_text ltx_font_italic" id="S3.SS2.SSS2.p3.1.1">BioHub</span>, a co-processor dedicated to biosignal processing. The BioHub manages the PPG sensor and its dedicated 6-axis IMU, computing the user’s vitals in real-time while minimising power consumption. By offloading tasks from the main MCU, the BioHub enables parallel execution, allowing the main processor to enter a low-power state when idle, optimising energy use.</p> </div> </section> </section> <section class="ltx_subsection" id="S3.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.3. </span>Wireless Subsystem</h3> <div class="ltx_para" id="S3.SS3.p1"> <p class="ltx_p" id="S3.SS3.p1.1">OmniBuds employ a dedicated Bluetooth System-on-Chip (SoC) for wireless communication. This SoC supports both Bluetooth Classic for audio streaming and Bluetooth Low Energy (BLE) for data communication, ensuring seamless connectivity with external devices. LE Audio can also be supported with software updates. In Bluetooth Classic mode, the SoC manages communication with the host device, notifying the main MCU of events like connections and audio stream changes. For BLE, it provides a transparent API for data transmission, allowing the main processor to focus on other tasks without handling communication protocols.</p> </div> <div class="ltx_para" id="S3.SS3.p2"> <p class="ltx_p" id="S3.SS3.p2.1">This division of responsibilities between the Bluetooth SoC and main MCU ensures that OmniBuds can maintain efficient wireless communication while keeping overall power consumption low. We will delve further into the software management of this communication system in Section <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS1" title="4.1. Communication Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">4.1</span></a>.</p> </div> </section> <section class="ltx_subsection" id="S3.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.4. </span>Sensing Subsystem</h3> <section class="ltx_subsubsection" id="S3.SS4.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.4.1. </span>Photoplethysmography sensor (PPG)</h4> <figure class="ltx_figure" id="S3.F5"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="406" id="S3.F5.g1" src="extracted/5905686/Images/ob_ppg_sample.png" width="509"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 5. </span>PPG data sampled at 50Hz during rest.</figcaption> </figure> <div class="ltx_para" id="S3.SS4.SSS1.p1"> <p class="ltx_p" id="S3.SS4.SSS1.p1.1">PPG is a non-invasive optical technique that detects blood volume changes by measuring light absorption. It uses an LED to emit light through the skin and a photodiode to measure the reflected light. As blood volume increases, more light is absorbed; as it decreases, less light is absorbed. This data provides insights into physiological parameters like heart rate, respiration rate, blood pressure, and oxygen saturation <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib18" title=""><span class="ltx_text" style="font-size:90%;">2021c</span></a>; <span class="ltx_text" style="font-size:90%;">Balaji et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib6" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>; <span class="ltx_text" style="font-size:90%;">Romero et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib35" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>)</cite>.</p> </div> <div class="ltx_para" id="S3.SS4.SSS1.p2"> <p class="ltx_p" id="S3.SS4.SSS1.p2.1">One of the criteria that drove the choice of the specific PPG sensor used in the OmniBuds, is its flexibility and the number of different configurations/parameters that it supports. The sensor includes three LEDs which generate three different wavelengths, green (530nm), red (660nm) and infrared (880nm), and a single photodiode with peak sensitivity at 860nm and a spectral bandwidth range from 420nm to 1020nm. Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F5" title="Figure 5 ‣ 3.4.1. Photoplethysmography sensor (PPG) ‣ 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">5</span></a> shows an example of 50Hz PPG data collected from the OmniBuds at the three available wavelengths where the vascular pulses and breathing-related modulations are clearly visible.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS4.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.4.2. </span>Inertial Measurement Unit</h4> <figure class="ltx_figure" id="S3.F6"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="405" id="S3.F6.g1" src="extracted/5905686/Images/imu_sample.png" width="509"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 6. </span>IMU data sampled at 100Hz during head nodding.</figcaption> </figure> <div class="ltx_para" id="S3.SS4.SSS2.p1"> <p class="ltx_p" id="S3.SS4.SSS2.p1.1">The OmniBuds feature two separate Inertial Measurement Units (IMUs). The first one is a 6-axis IMU (accelerometer and gyroscope) which is co-located on the same PCB that hosts the PPG sensor and is dedicated to PPG motion artefact removal. In addition to that, OmniBuds also features a 9-axis IMU (i.e., accelerometer, gyroscope, and magnetometer). Whilst this can be used to track macro motions such as walking and running, it can also be exploited by researchers to perform head motion tracking and 3D pose estimation <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib17" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>; <span class="ltx_text" style="font-size:90%;">Lee et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib27" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>)</cite>.</p> </div> <div class="ltx_para" id="S3.SS4.SSS2.p2"> <p class="ltx_p" id="S3.SS4.SSS2.p2.1">The 9-axis IMU is a modern low-power component which offers natively several features like step detection, step counting, device orientation and single and double click detection. More importantly, the IMU allows the execution of simple on-chip ML models courtesy of its embedded Machine Learning Core module (MLC). The MLC works directly by taking as input the data streams coming from accelerometer, gyroscope and magnetometer, computing features on the data and passing these features to user-defined decision tree models. This enables users to run simple IMU-based inference tasks directly on the sensor waking up the main MCU only when an inference is available. This results in great power saving compared to waking up every time a new raw data sample is available and computing the inference on the MCU. Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F6" title="Figure 6 ‣ 3.4.2. Inertial Measurement Unit ‣ 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">6</span></a> shows data collected from the main OmniBuds’ IMU while the user was nodding.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS4.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.4.3. </span>Temperature</h4> <div class="ltx_para" id="S3.SS4.SSS3.p1"> <p class="ltx_p" id="S3.SS4.SSS3.p1.1">Temperature sensing is achieved using an infrared sensor directed at the concha area of the ear, allowing for applications like body core temperature and fertility tracking.</p> </div> <div class="ltx_para" id="S3.SS4.SSS3.p2"> <p class="ltx_p" id="S3.SS4.SSS3.p2.6">The sensor was selected for its high accuracy of <math alttext="\pm 0.2^{\circ}" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.1.m1.1"><semantics id="S3.SS4.SSS3.p2.1.m1.1a"><mrow id="S3.SS4.SSS3.p2.1.m1.1.1" xref="S3.SS4.SSS3.p2.1.m1.1.1.cmml"><mo id="S3.SS4.SSS3.p2.1.m1.1.1a" xref="S3.SS4.SSS3.p2.1.m1.1.1.cmml">±</mo><msup id="S3.SS4.SSS3.p2.1.m1.1.1.2" xref="S3.SS4.SSS3.p2.1.m1.1.1.2.cmml"><mn id="S3.SS4.SSS3.p2.1.m1.1.1.2.2" xref="S3.SS4.SSS3.p2.1.m1.1.1.2.2.cmml">0.2</mn><mo id="S3.SS4.SSS3.p2.1.m1.1.1.2.3" xref="S3.SS4.SSS3.p2.1.m1.1.1.2.3.cmml">∘</mo></msup></mrow><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.1.m1.1b"><apply id="S3.SS4.SSS3.p2.1.m1.1.1.cmml" xref="S3.SS4.SSS3.p2.1.m1.1.1"><csymbol cd="latexml" id="S3.SS4.SSS3.p2.1.m1.1.1.1.cmml" xref="S3.SS4.SSS3.p2.1.m1.1.1">plus-or-minus</csymbol><apply id="S3.SS4.SSS3.p2.1.m1.1.1.2.cmml" xref="S3.SS4.SSS3.p2.1.m1.1.1.2"><csymbol cd="ambiguous" id="S3.SS4.SSS3.p2.1.m1.1.1.2.1.cmml" xref="S3.SS4.SSS3.p2.1.m1.1.1.2">superscript</csymbol><cn id="S3.SS4.SSS3.p2.1.m1.1.1.2.2.cmml" type="float" xref="S3.SS4.SSS3.p2.1.m1.1.1.2.2">0.2</cn><compose id="S3.SS4.SSS3.p2.1.m1.1.1.2.3.cmml" xref="S3.SS4.SSS3.p2.1.m1.1.1.2.3"></compose></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.1.m1.1c">\pm 0.2^{\circ}</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.1.m1.1d">± 0.2 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C within the range of <math alttext="35^{\circ}" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.2.m2.1"><semantics id="S3.SS4.SSS3.p2.2.m2.1a"><msup id="S3.SS4.SSS3.p2.2.m2.1.1" xref="S3.SS4.SSS3.p2.2.m2.1.1.cmml"><mn id="S3.SS4.SSS3.p2.2.m2.1.1.2" xref="S3.SS4.SSS3.p2.2.m2.1.1.2.cmml">35</mn><mo id="S3.SS4.SSS3.p2.2.m2.1.1.3" xref="S3.SS4.SSS3.p2.2.m2.1.1.3.cmml">∘</mo></msup><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.2.m2.1b"><apply id="S3.SS4.SSS3.p2.2.m2.1.1.cmml" xref="S3.SS4.SSS3.p2.2.m2.1.1"><csymbol cd="ambiguous" id="S3.SS4.SSS3.p2.2.m2.1.1.1.cmml" xref="S3.SS4.SSS3.p2.2.m2.1.1">superscript</csymbol><cn id="S3.SS4.SSS3.p2.2.m2.1.1.2.cmml" type="integer" xref="S3.SS4.SSS3.p2.2.m2.1.1.2">35</cn><compose id="S3.SS4.SSS3.p2.2.m2.1.1.3.cmml" xref="S3.SS4.SSS3.p2.2.m2.1.1.3"></compose></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.2.m2.1c">35^{\circ}</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.2.m2.1d">35 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C to <math alttext="42^{\circ}" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.3.m3.1"><semantics id="S3.SS4.SSS3.p2.3.m3.1a"><msup id="S3.SS4.SSS3.p2.3.m3.1.1" xref="S3.SS4.SSS3.p2.3.m3.1.1.cmml"><mn id="S3.SS4.SSS3.p2.3.m3.1.1.2" xref="S3.SS4.SSS3.p2.3.m3.1.1.2.cmml">42</mn><mo id="S3.SS4.SSS3.p2.3.m3.1.1.3" xref="S3.SS4.SSS3.p2.3.m3.1.1.3.cmml">∘</mo></msup><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.3.m3.1b"><apply id="S3.SS4.SSS3.p2.3.m3.1.1.cmml" xref="S3.SS4.SSS3.p2.3.m3.1.1"><csymbol cd="ambiguous" id="S3.SS4.SSS3.p2.3.m3.1.1.1.cmml" xref="S3.SS4.SSS3.p2.3.m3.1.1">superscript</csymbol><cn id="S3.SS4.SSS3.p2.3.m3.1.1.2.cmml" type="integer" xref="S3.SS4.SSS3.p2.3.m3.1.1.2">42</cn><compose id="S3.SS4.SSS3.p2.3.m3.1.1.3.cmml" xref="S3.SS4.SSS3.p2.3.m3.1.1.3"></compose></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.3.m3.1c">42^{\circ}</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.3.m3.1d">42 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C<span class="ltx_note ltx_role_footnote" id="footnote1"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_tag ltx_tag_note">1</span>Ambient temperature between <math alttext="15^{\circ}" class="ltx_Math" display="inline" id="footnote1.m1.1"><semantics id="footnote1.m1.1b"><msup id="footnote1.m1.1.1" xref="footnote1.m1.1.1.cmml"><mn id="footnote1.m1.1.1.2" xref="footnote1.m1.1.1.2.cmml">15</mn><mo id="footnote1.m1.1.1.3" xref="footnote1.m1.1.1.3.cmml">∘</mo></msup><annotation-xml encoding="MathML-Content" id="footnote1.m1.1c"><apply id="footnote1.m1.1.1.cmml" xref="footnote1.m1.1.1"><csymbol cd="ambiguous" id="footnote1.m1.1.1.1.cmml" xref="footnote1.m1.1.1">superscript</csymbol><cn id="footnote1.m1.1.1.2.cmml" type="integer" xref="footnote1.m1.1.1.2">15</cn><compose id="footnote1.m1.1.1.3.cmml" xref="footnote1.m1.1.1.3"></compose></apply></annotation-xml><annotation encoding="application/x-tex" id="footnote1.m1.1d">15^{\circ}</annotation><annotation encoding="application/x-llamapun" id="footnote1.m1.1e">15 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C and <math alttext="40^{\circ}" class="ltx_Math" display="inline" id="footnote1.m2.1"><semantics id="footnote1.m2.1b"><msup id="footnote1.m2.1.1" xref="footnote1.m2.1.1.cmml"><mn id="footnote1.m2.1.1.2" xref="footnote1.m2.1.1.2.cmml">40</mn><mo id="footnote1.m2.1.1.3" xref="footnote1.m2.1.1.3.cmml">∘</mo></msup><annotation-xml encoding="MathML-Content" id="footnote1.m2.1c"><apply id="footnote1.m2.1.1.cmml" xref="footnote1.m2.1.1"><csymbol cd="ambiguous" id="footnote1.m2.1.1.1.cmml" xref="footnote1.m2.1.1">superscript</csymbol><cn id="footnote1.m2.1.1.2.cmml" type="integer" xref="footnote1.m2.1.1.2">40</cn><compose id="footnote1.m2.1.1.3.cmml" xref="footnote1.m2.1.1.3"></compose></apply></annotation-xml><annotation encoding="application/x-tex" id="footnote1.m2.1d">40^{\circ}</annotation><annotation encoding="application/x-llamapun" id="footnote1.m2.1e">40 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C.</span></span></span>, which is ideal for human sensing applications. The total measurement range spans from <math alttext="-20^{\circ}" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.4.m4.1"><semantics id="S3.SS4.SSS3.p2.4.m4.1a"><mrow id="S3.SS4.SSS3.p2.4.m4.1.1" xref="S3.SS4.SSS3.p2.4.m4.1.1.cmml"><mo id="S3.SS4.SSS3.p2.4.m4.1.1a" xref="S3.SS4.SSS3.p2.4.m4.1.1.cmml">−</mo><msup id="S3.SS4.SSS3.p2.4.m4.1.1.2" xref="S3.SS4.SSS3.p2.4.m4.1.1.2.cmml"><mn id="S3.SS4.SSS3.p2.4.m4.1.1.2.2" xref="S3.SS4.SSS3.p2.4.m4.1.1.2.2.cmml">20</mn><mo id="S3.SS4.SSS3.p2.4.m4.1.1.2.3" xref="S3.SS4.SSS3.p2.4.m4.1.1.2.3.cmml">∘</mo></msup></mrow><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.4.m4.1b"><apply id="S3.SS4.SSS3.p2.4.m4.1.1.cmml" xref="S3.SS4.SSS3.p2.4.m4.1.1"><minus id="S3.SS4.SSS3.p2.4.m4.1.1.1.cmml" xref="S3.SS4.SSS3.p2.4.m4.1.1"></minus><apply id="S3.SS4.SSS3.p2.4.m4.1.1.2.cmml" xref="S3.SS4.SSS3.p2.4.m4.1.1.2"><csymbol cd="ambiguous" id="S3.SS4.SSS3.p2.4.m4.1.1.2.1.cmml" xref="S3.SS4.SSS3.p2.4.m4.1.1.2">superscript</csymbol><cn id="S3.SS4.SSS3.p2.4.m4.1.1.2.2.cmml" type="integer" xref="S3.SS4.SSS3.p2.4.m4.1.1.2.2">20</cn><compose id="S3.SS4.SSS3.p2.4.m4.1.1.2.3.cmml" xref="S3.SS4.SSS3.p2.4.m4.1.1.2.3"></compose></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.4.m4.1c">-20^{\circ}</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.4.m4.1d">- 20 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C to <math alttext="100^{\circ}" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.5.m5.1"><semantics id="S3.SS4.SSS3.p2.5.m5.1a"><msup id="S3.SS4.SSS3.p2.5.m5.1.1" xref="S3.SS4.SSS3.p2.5.m5.1.1.cmml"><mn id="S3.SS4.SSS3.p2.5.m5.1.1.2" xref="S3.SS4.SSS3.p2.5.m5.1.1.2.cmml">100</mn><mo id="S3.SS4.SSS3.p2.5.m5.1.1.3" xref="S3.SS4.SSS3.p2.5.m5.1.1.3.cmml">∘</mo></msup><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.5.m5.1b"><apply id="S3.SS4.SSS3.p2.5.m5.1.1.cmml" xref="S3.SS4.SSS3.p2.5.m5.1.1"><csymbol cd="ambiguous" id="S3.SS4.SSS3.p2.5.m5.1.1.1.cmml" xref="S3.SS4.SSS3.p2.5.m5.1.1">superscript</csymbol><cn id="S3.SS4.SSS3.p2.5.m5.1.1.2.cmml" type="integer" xref="S3.SS4.SSS3.p2.5.m5.1.1.2">100</cn><compose id="S3.SS4.SSS3.p2.5.m5.1.1.3.cmml" xref="S3.SS4.SSS3.p2.5.m5.1.1.3"></compose></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.5.m5.1c">100^{\circ}</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.5.m5.1d">100 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT</annotation></semantics></math> C, making it versatile for various conditions. Additionally, the sensor’s configurable sampling rate of up to <math alttext="64" class="ltx_Math" display="inline" id="S3.SS4.SSS3.p2.6.m6.1"><semantics id="S3.SS4.SSS3.p2.6.m6.1a"><mn id="S3.SS4.SSS3.p2.6.m6.1.1" xref="S3.SS4.SSS3.p2.6.m6.1.1.cmml">64</mn><annotation-xml encoding="MathML-Content" id="S3.SS4.SSS3.p2.6.m6.1b"><cn id="S3.SS4.SSS3.p2.6.m6.1.1.cmml" type="integer" xref="S3.SS4.SSS3.p2.6.m6.1.1">64</cn></annotation-xml><annotation encoding="application/x-tex" id="S3.SS4.SSS3.p2.6.m6.1c">64</annotation><annotation encoding="application/x-llamapun" id="S3.SS4.SSS3.p2.6.m6.1d">64</annotation></semantics></math> Hz allows it to detect minute and quick temperature fluctuations, providing enhanced precision in continuous monitoring scenarios.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS4.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.4.4. </span>Microphones</h4> <figure class="ltx_figure" id="S3.F7"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="297" id="S3.F7.g1" src="extracted/5905686/Images/in_ear_mic_sample.png" width="598"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 7. </span>Internal microphone data sampled at 48kHz during silence.</figcaption> </figure> <div class="ltx_para" id="S3.SS4.SSS4.p1"> <p class="ltx_p" id="S3.SS4.SSS4.p1.1">The OmniBuds come equipped with three microphones. Two are facing outward while one is embedded in the speaker chamber and facing the user’s ear canal, as discussed in section <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.SS1" title="3.1. Form Factor and Sensor Placement ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">3.1</span></a>. When selecting microphones for OmniBuds, we carefully balanced the requirements of standard acoustic features like active noise cancellation and transparency mode with the need for high-fidelity sensing performance. Microphones optimised for ANC and Transparency must capture environmental sounds accurately, while sensing applications require them to detect subtle internal body sounds, such as heartbeats. Ensuring that the microphones could excel at both tasks was critical to the design. We selected components that offered a sufficient dynamic range and sensitivity to perform well in both acoustic and sensing contexts, acknowledging the inherent trade-offs in optimising for dual functionality.</p> </div> <div class="ltx_para" id="S3.SS4.SSS4.p2"> <p class="ltx_p" id="S3.SS4.SSS4.p2.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S3.F7" title="Figure 7 ‣ 3.4.4. Microphones ‣ 3.4. Sensing Subsystem ‣ 3. Hardware Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">7</span></a> shows data captured from the OmniBuds’ internal microphone at 48kHz during silence. The signal clearly shows how the microphone is capable of capturing heart sounds which are used for diverse applications, like heart rate measurement and blood pressure estimation <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Truong et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib37" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>; <span class="ltx_text" style="font-size:90%;">Butkow et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib8" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite>.</p> </div> </section> </section> </section> <section class="ltx_section" id="S4"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">4. </span>Software Architecture</h2> <figure class="ltx_figure" id="S4.F8"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="251" id="S4.F8.g1" src="x5.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 8. </span>OmniBuds Ecosystem and SW Overview.</figcaption> </figure> <figure class="ltx_figure" id="S4.F9"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="360" id="S4.F9.g1" src="x6.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 9. </span>OmniBuds Software Architecture.</figcaption> </figure> <div class="ltx_para" id="S4.p1"> <p class="ltx_p" id="S4.p1.1">The OmniBuds ecosystem consists primarily of two earbuds with identical hardware, as discussed earlier, and is complemented by a charging case and an optional host device connected via Bluetooth Classic (BT) and/or Bluetooth Low Energy (BLE). Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F8" title="Figure 8 ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">8</span></a> provides an overview of the OmniBuds ecosystem. The software architecture is designed to fully leverage this multi-device setup, enabling efficient coordination across devices.</p> </div> <div class="ltx_para" id="S4.p2"> <p class="ltx_p" id="S4.p2.1">The software architecture of OmniBuds adopts a layered design that allows developers to extend the platform’s functionality while respecting the hardware’s constraints. As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F9" title="Figure 9 ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">9</span></a>, the architecture consists of four key layers:</p> </div> <div class="ltx_para" id="S4.p3"> <ul class="ltx_itemize" id="S4.I1"> <li class="ltx_item" id="S4.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I1.i1.p1"> <p class="ltx_p" id="S4.I1.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i1.p1.1.1">Hardware Abstraction Layer (HAL):</span> Encapsulates device drivers and utility libraries, ensuring portability across different hardware configurations.</p> </div> </li> <li class="ltx_item" id="S4.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I1.i2.p1"> <p class="ltx_p" id="S4.I1.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i2.p1.1.1">Middleware Layer:</span> Provides essential APIs for applications to communicate with and efficiently use hardware resources. It ensures independent applications can run without conflicts by managing access to hardware in a modular manner.</p> </div> </li> <li class="ltx_item" id="S4.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I1.i3.p1"> <p class="ltx_p" id="S4.I1.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i3.p1.1.1">Applications Layer:</span> Hosts algorithms that implement functionalities such as vital signs monitoring, leveraging localised computing to optimise performance and conserve power.</p> </div> </li> <li class="ltx_item" id="S4.I1.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I1.i4.p1"> <p class="ltx_p" id="S4.I1.i4.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i4.p1.1.1">Services Layer:</span> Provides system-wide services such as file management, persistent storage, and task scheduling. The platform runs FreeRTOS which is packaged as part of the services. The applications are free to use FreeRTOS primitives for their own purposes, but should use the OmniBuds middleware APIs for the functionalities provided by the platform and to access the hardware infrastructure.</p> </div> </li> </ul> </div> <div class="ltx_para" id="S4.p4"> <p class="ltx_p" id="S4.p4.1">This layered design ensures flexibility, scalability, and ease of development while maintaining tight integration between software and hardware. The following sections focus on three critical subsystems: communication, sense and compute, and load balancing, each playing a vital role in the platform’s efficient operation.</p> </div> <section class="ltx_subsection" id="S4.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.1. </span>Communication Subsystem</h3> <div class="ltx_para" id="S4.SS1.p1"> <p class="ltx_p" id="S4.SS1.p1.1">The Communication Subsystem in OmniBuds manages data exchange within the platform and with external devices. It consists of two main components: Inter-Module Communication (IMC) for internal communication between software modules and the Communication Interface, which uses the BLE protocol to connect with external peripherals (Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F9" title="Figure 9 ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">9</span></a>).</p> </div> <section class="ltx_subsubsection" id="S4.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.1.1. </span>Inter-Module Communication (IMC)</h4> <div class="ltx_para" id="S4.SS1.SSS1.p1"> <p class="ltx_p" id="S4.SS1.SSS1.p1.1">IMC facilitates seamless and transparent communication between software modules, both locally and across devices in the OmniBuds ecosystem. This approach ensures that software modules can interact without needing to manage the complexities of local versus remote communication.</p> </div> <div class="ltx_para" id="S4.SS1.SSS1.p2"> <p class="ltx_p" id="S4.SS1.SSS1.p2.1">This system is based on the observer pattern <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Gamma</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib20" title=""><span class="ltx_text" style="font-size:90%;">1995</span></a>)</cite>, where components register for updates from extendable triggers. When a trigger is activated, the registered modules receive notifications with the relevant data. IMC messages are asynchronous and can carry up to 64 bytes of data.</p> </div> <div class="ltx_para" id="S4.SS1.SSS1.p3"> <p class="ltx_p" id="S4.SS1.SSS1.p3.1">The IMC API offers flexibility in message dissemination, allowing modules to specify if a message is sent to the local device, a peer device (i.e. the other earbud), other devices in the OmniBuds ecosystem, or all of the above. The underlying IMC logic handles message queuing efficiently, even when devices use different physical layer connections. The APIs for sending and receiving IMC messages are shown below:</p> </div> <div class="ltx_para" id="S4.SS1.SSS1.p4"> <div class="ltx_listing ltx_lst_language_C ltx_lstlisting ltx_framed ltx_framed_rectangle ltx_listing" id="S4.SS1.SSS1.p4.1"> <div class="ltx_listing_data"><a download="" href="data:text/plain;base64,dm9pZCBvYl9JTUNTZW5kTWVzc2FnZSh1aW50OF90IHRyaWdnZXJJRCwgdWludDhfdCAqZGF0YUJ1ZmZlciwgdWludDhfdCBkYXRhTGVuZ3RoLCB1aW50OF90IGRlc3RpbmF0aW9uKQoKaW50IG9iX0lNQ1JlZ2lzdGVyTWVzc2FnZUNhbGxiYWNrKHVpbnQ4X3QgdHJpZ2dlcklELCB0cmlnZ2VyRXZ0Q0JfdCAqdHJpZ2dlckNCKQ==">⬇</a></div> <div class="ltx_listingline" id="lstnumberx1"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx1.1" style="font-size:90%;color:#0000FF;">void</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx1.3" style="font-size:90%;color:#808000;">ob_IMCSendMessage</span><span class="ltx_text" id="lstnumberx1.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx1.5" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx1.7" style="font-size:90%;">triggerID</span><span class="ltx_text" id="lstnumberx1.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx1.10" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.11" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx1.12" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx1.13" style="font-size:90%;">dataBuffer</span><span class="ltx_text" id="lstnumberx1.14" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.15" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx1.16" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.17" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx1.18" style="font-size:90%;">dataLength</span><span class="ltx_text" id="lstnumberx1.19" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.20" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx1.21" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx1.22" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx1.23" style="font-size:90%;">destination</span><span class="ltx_text" id="lstnumberx1.24" style="font-size:90%;">)</span> </div> <div class="ltx_listingline" id="lstnumberx2"> </div> <div class="ltx_listingline" id="lstnumberx3"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx3.1" style="font-size:90%;color:#0000FF;">int</span><span class="ltx_text ltx_lst_space" id="lstnumberx3.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx3.3" style="font-size:90%;color:#808000;">ob_IMCRegisterMessageCallback</span><span class="ltx_text" id="lstnumberx3.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx3.5" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx3.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx3.7" style="font-size:90%;">triggerID</span><span class="ltx_text" id="lstnumberx3.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx3.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx3.10" style="font-size:90%;color:#0000FF;">triggerEvtCB_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx3.11" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx3.12" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx3.13" style="font-size:90%;">triggerCB</span><span class="ltx_text" id="lstnumberx3.14" style="font-size:90%;">)</span> </div> </div> </div> </section> <section class="ltx_subsubsection" id="S4.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.1.2. </span>Communication Interface</h4> <div class="ltx_para" id="S4.SS1.SSS2.p1"> <p class="ltx_p" id="S4.SS1.SSS2.p1.1">The Communication Interface module in OmniBuds manages data exchange with external devices via Bluetooth Low Energy (BLE) and handles audio playback and phone calls through Bluetooth Classic.</p> </div> <div class="ltx_para" id="S4.SS1.SSS2.p2"> <p class="ltx_p" id="S4.SS1.SSS2.p2.1">To external devices, the two OmniBuds earbuds appear as a single device, with the <span class="ltx_text ltx_font_italic" id="S4.SS1.SSS2.p2.1.1">primary</span> earbud managing the connection and forwarding messages to the <span class="ltx_text ltx_font_italic" id="S4.SS1.SSS2.p2.1.2">secondary</span> earbud. This architecture simplifies communication while maintaining seamless interaction between both earbuds. Details on how this supports load balancing and battery optimisation are discussed in Section <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS3" title="4.3. Load Balancing Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">4.3</span></a>.</p> </div> <div class="ltx_para" id="S4.SS1.SSS2.p3"> <p class="ltx_p" id="S4.SS1.SSS2.p3.1">For data exchange, OmniBuds introduces <span class="ltx_text ltx_font_italic" id="S4.SS1.SSS2.p3.1.1">addressable peripherals</span>, which can be either physical sensors (e.g., IMU or PPG) or virtual peripherals (e.g., heart rate derived from PPG). These allow third-party devices to send commands, collect data, and interact with specific peripherals via a consistent interface. BLE communication is encrypted to ensure data privacy and security.</p> </div> <div class="ltx_para" id="S4.SS1.SSS2.p4"> <p class="ltx_p" id="S4.SS1.SSS2.p4.1">BLE communication is based on three message types:</p> <ul class="ltx_itemize" id="S4.I2"> <li class="ltx_item" id="S4.I2.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I2.i1.p1"> <p class="ltx_p" id="S4.I2.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I2.i1.p1.1.1">Data messages:</span> For large or continuous data streams.</p> </div> </li> <li class="ltx_item" id="S4.I2.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I2.i2.p1"> <p class="ltx_p" id="S4.I2.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I2.i2.p1.1.1">Event messages:</span> For handling sporadic peripheral triggers.</p> </div> </li> <li class="ltx_item" id="S4.I2.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S4.I2.i3.p1"> <p class="ltx_p" id="S4.I2.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I2.i3.p1.1.1">Configuration messages:</span> For configuring peripheral settings such as sampling rate or power mode (using different endpoints associated with each peripheral).</p> </div> </li> </ul> </div> <div class="ltx_para" id="S4.SS1.SSS2.p5"> <p class="ltx_p" id="S4.SS1.SSS2.p5.1">The main APIs for managing BLE communication are as follows:</p> <div class="ltx_listing ltx_lst_language_C ltx_lstlisting ltx_framed ltx_framed_rectangle ltx_listing" id="S4.SS1.SSS2.p5.2"> <div class="ltx_listing_data"><a download="" href="data:text/plain;base64,Ly8gUmVnaXN0ZXIgY2FsbGJhY2sgdG8gcmVjZWl2ZSBkYXRhL2V2ZW50L2NvbmZpZyBtZXNzYWdlcwppbnQgb2JfcmVnaXN0ZXJQZXJpcGhlcmFsQ2FsbGJhY2sob2JfcGVyaXBoSURfdCBwZXJpcGhlcmFsRCwgcnhDb21tRXZ0Q0IgKmNhbGxiYWMpOwoKLy8gU2VuZCBhIGRhdGEvZXZlbnQgbWVzc2FnZQp2b2lkIG9iX3NlbmREYXRhTWVzc2FnZShvYl9wZXJpcGhJRF90IHBlcmlwaGVyYWxELCB1aW50OF90ICpkYXRhX2J1ZiwgdWludDE2X3QgZGF0YV9sZW4pOwp2b2lkIG9iX3NlbmRFdmVudE1lc3NhZ2Uob2JfcGVyaXBoSURfdCBwZXJpcGhlcmFsRCwgdWludDhfdCAqZGF0YV9idWYsIHVpbnQxNl90IGRhdGFfbGVuKTsKCi8vIFNlbmQgYSBjb25maWd1cmF0aW9uIG1lc3NhZ2UKdm9pZCBvYl9zZW5kQ29uZmlnTWVzc2FnZShvYl9wZXJpcGhJRF90IHBlcmlwaGVyYWxELCB1aW50OF90IGNvbmZpZ0VuZHBvaW50LCB1aW50OF90ICpkYXRhX2J1ZiwgdWludDE2X3QgZGF0YV9sZW4pOwoKLy8gU2VuZCBhIGNvbmZpZ3VyYXRpb24gbWVzc2FnZSByZXNwb25zZQp2b2lkIG9iX3NlbmRDb25maWdNZXNzYWdlUmVzcG9uc2Uob2JfcGVyaXBoSURfdCBwZXJpcGhlcmFsRCwgdWludDhfdCBjb25maWdFbmRwb2ludCwgb2JfbXNnRXJyb3JDb2RlX3QgZXJyQ29kZSwgdWludDhfdCAqZGF0YV9idWYsIHVpbnQxNl90IGRhdGFfbGVuKTs=">⬇</a></div> <div class="ltx_listingline" id="lstnumberx4"> <span class="ltx_text ltx_lst_comment" id="lstnumberx4.1" style="font-size:90%;color:#808080;">//<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.1"> </span>Register<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.2"> </span>callback<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.3"> </span>to<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.4"> </span>receive<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.5"> </span>data/event/config<span class="ltx_text ltx_lst_space" id="lstnumberx4.1.6"> </span>messages</span> </div> <div class="ltx_listingline" id="lstnumberx5"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx5.1" style="font-size:90%;color:#0000FF;">int</span><span class="ltx_text ltx_lst_space" id="lstnumberx5.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx5.3" style="font-size:90%;color:#808000;">ob_registerPeripheralCallback</span><span class="ltx_text" id="lstnumberx5.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx5.5" style="font-size:90%;color:#0000FF;">ob_periphID_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx5.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx5.7" style="font-size:90%;">peripheralD</span><span class="ltx_text" id="lstnumberx5.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx5.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx5.10" style="font-size:90%;color:#0000FF;">rxCommEvtCB</span><span class="ltx_text ltx_lst_space" id="lstnumberx5.11" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx5.12" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx5.13" style="font-size:90%;">callbac</span><span class="ltx_text" id="lstnumberx5.14" style="font-size:90%;">);</span> </div> <div class="ltx_listingline" id="lstnumberx6"> </div> <div class="ltx_listingline" id="lstnumberx7"> <span class="ltx_text ltx_lst_comment" id="lstnumberx7.1" style="font-size:90%;color:#808080;">//<span class="ltx_text ltx_lst_space" id="lstnumberx7.1.1"> </span>Send<span class="ltx_text ltx_lst_space" id="lstnumberx7.1.2"> </span>a<span class="ltx_text ltx_lst_space" id="lstnumberx7.1.3"> </span>data/event<span class="ltx_text ltx_lst_space" id="lstnumberx7.1.4"> </span>message</span> </div> <div class="ltx_listingline" id="lstnumberx8"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx8.1" style="font-size:90%;color:#0000FF;">void</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx8.3" style="font-size:90%;color:#808000;">ob_sendDataMessage</span><span class="ltx_text" id="lstnumberx8.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx8.5" style="font-size:90%;color:#0000FF;">ob_periphID_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx8.7" style="font-size:90%;">peripheralD</span><span class="ltx_text" id="lstnumberx8.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx8.10" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.11" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx8.12" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx8.13" style="font-size:90%;">data_buf</span><span class="ltx_text" id="lstnumberx8.14" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.15" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx8.16" style="font-size:90%;color:#0000FF;">uint16_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx8.17" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx8.18" style="font-size:90%;">data_len</span><span class="ltx_text" id="lstnumberx8.19" style="font-size:90%;">);</span> </div> <div class="ltx_listingline" id="lstnumberx9"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx9.1" style="font-size:90%;color:#0000FF;">void</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx9.3" style="font-size:90%;color:#808000;">ob_sendEventMessage</span><span class="ltx_text" id="lstnumberx9.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx9.5" style="font-size:90%;color:#0000FF;">ob_periphID_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx9.7" style="font-size:90%;">peripheralD</span><span class="ltx_text" id="lstnumberx9.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx9.10" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.11" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx9.12" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx9.13" style="font-size:90%;">data_buf</span><span class="ltx_text" id="lstnumberx9.14" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.15" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx9.16" style="font-size:90%;color:#0000FF;">uint16_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx9.17" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx9.18" style="font-size:90%;">data_len</span><span class="ltx_text" id="lstnumberx9.19" style="font-size:90%;">);</span> </div> <div class="ltx_listingline" id="lstnumberx10"> </div> <div class="ltx_listingline" id="lstnumberx11"> <span class="ltx_text ltx_lst_comment" id="lstnumberx11.1" style="font-size:90%;color:#808080;">//<span class="ltx_text ltx_lst_space" id="lstnumberx11.1.1"> </span>Send<span class="ltx_text ltx_lst_space" id="lstnumberx11.1.2"> </span>a<span class="ltx_text ltx_lst_space" id="lstnumberx11.1.3"> </span>configuration<span class="ltx_text ltx_lst_space" id="lstnumberx11.1.4"> </span>message</span> </div> <div class="ltx_listingline" id="lstnumberx12"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx12.1" style="font-size:90%;color:#0000FF;">void</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx12.3" style="font-size:90%;color:#808000;">ob_sendConfigMessage</span><span class="ltx_text" id="lstnumberx12.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx12.5" style="font-size:90%;color:#0000FF;">ob_periphID_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx12.7" style="font-size:90%;">peripheralD</span><span class="ltx_text" id="lstnumberx12.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx12.10" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.11" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx12.12" style="font-size:90%;">configEndpoint</span><span class="ltx_text" id="lstnumberx12.13" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.14" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx12.15" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.16" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx12.17" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx12.18" style="font-size:90%;">data_buf</span><span class="ltx_text" id="lstnumberx12.19" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.20" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx12.21" style="font-size:90%;color:#0000FF;">uint16_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx12.22" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx12.23" style="font-size:90%;">data_len</span><span class="ltx_text" id="lstnumberx12.24" style="font-size:90%;">);</span> </div> <div class="ltx_listingline" id="lstnumberx13"> </div> <div class="ltx_listingline" id="lstnumberx14"> <span class="ltx_text ltx_lst_comment" id="lstnumberx14.1" style="font-size:90%;color:#808080;">//<span class="ltx_text ltx_lst_space" id="lstnumberx14.1.1"> </span>Send<span class="ltx_text ltx_lst_space" id="lstnumberx14.1.2"> </span>a<span class="ltx_text ltx_lst_space" id="lstnumberx14.1.3"> </span>configuration<span class="ltx_text ltx_lst_space" id="lstnumberx14.1.4"> </span>message<span class="ltx_text ltx_lst_space" id="lstnumberx14.1.5"> </span>response</span> </div> <div class="ltx_listingline" id="lstnumberx15"> <span class="ltx_text ltx_lst_keyword" id="lstnumberx15.1" style="font-size:90%;color:#0000FF;">void</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.2" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_emph" id="lstnumberx15.3" style="font-size:90%;color:#808000;">ob_sendConfigMessageResponse</span><span class="ltx_text" id="lstnumberx15.4" style="font-size:90%;">(</span><span class="ltx_text ltx_lst_keyword" id="lstnumberx15.5" style="font-size:90%;color:#0000FF;">ob_periphID_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.6" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx15.7" style="font-size:90%;">peripheralD</span><span class="ltx_text" id="lstnumberx15.8" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.9" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx15.10" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.11" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx15.12" style="font-size:90%;">configEndpoint</span><span class="ltx_text" id="lstnumberx15.13" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.14" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx15.15" style="font-size:90%;color:#0000FF;">ob_msgErrorCode_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.16" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx15.17" style="font-size:90%;">errCode</span><span class="ltx_text" id="lstnumberx15.18" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.19" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx15.20" style="font-size:90%;color:#0000FF;">uint8_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.21" style="font-size:90%;"> </span><span class="ltx_text" id="lstnumberx15.22" style="font-size:90%;">*</span><span class="ltx_text ltx_lst_identifier" id="lstnumberx15.23" style="font-size:90%;">data_buf</span><span class="ltx_text" id="lstnumberx15.24" style="font-size:90%;">,</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.25" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_keyword" id="lstnumberx15.26" style="font-size:90%;color:#0000FF;">uint16_t</span><span class="ltx_text ltx_lst_space" id="lstnumberx15.27" style="font-size:90%;"> </span><span class="ltx_text ltx_lst_identifier" id="lstnumberx15.28" style="font-size:90%;">data_len</span><span class="ltx_text" id="lstnumberx15.29" style="font-size:90%;">);</span> </div> </div> </div> <div class="ltx_para" id="S4.SS1.SSS2.p6"> <p class="ltx_p" id="S4.SS1.SSS2.p6.1">OmniBuds also manages audio playback and phone calls via Bluetooth Classic, using the Hands-Free Profile (HFP) for calls and Audio-Video Remote Control Profile (A2DP) for playback, coordinated by the communication subsystem and wireless chipset.</p> </div> </section> </section> <section class="ltx_subsection" id="S4.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.2. </span>Sense and Compute Subsystem</h3> <div class="ltx_para" id="S4.SS2.p1"> <p class="ltx_p" id="S4.SS2.p1.1">The Sense and Compute Subsystem in OmniBuds provides applications with primitives to manage the platform’s computational units and sensors efficiently. Three critical modules define this subsystem: the Sensor Distribution Module, which manages sensor data access for multiple applications; the Machine Learning Engine, responsible for executing models on the CNN accelerator; and the Audio Manager, which oversees audio pipelines within the dedicated Audio DSP. The following sections provide an overview of these key modules.</p> </div> <section class="ltx_subsubsection" id="S4.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.1. </span>Sensor Distribution Module</h4> <div class="ltx_para" id="S4.SS2.SSS1.p1"> <p class="ltx_p" id="S4.SS2.SSS1.p1.1">The Sensor Distribution Module simplifies access to sensor data for multiple applications, particularly for sensors like the IMU, PPG and temperature. It allows application developers to easily integrate sensor data without managing conflicts or sensor resource issues.</p> </div> <div class="ltx_para" id="S4.SS2.SSS1.p2"> <p class="ltx_p" id="S4.SS2.SSS1.p2.1">Different Business Logic modules are provided for the available sensors (Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F9" title="Figure 9 ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">9</span></a>) which are used as singleton <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Gamma</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib20" title=""><span class="ltx_text" style="font-size:90%;">1995</span></a>)</cite> from multiple applications. Applications register with the module by specifying their requirements for the sensor data, such as sampling rate, window length, and other parameters. The module then configures the sensor accordingly and efficiently delivers the requested data to each application. To further enhance efficiency, the module leverages on-chip computational features, such as the BioHub and the machine learning core in the IMU, to process data locally when necessary. This reduces the computational load on the main MCU, conserves power, and ensures that sensor data is processed and delivered in a timely manner.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.2. </span>Machine Learning Engine</h4> <div class="ltx_para" id="S4.SS2.SSS2.p1"> <p class="ltx_p" id="S4.SS2.SSS2.p1.1">The Machine Learning Engine manages the execution of machine learning models on the CNN accelerator, ensuring efficient on-device inference. The use of this module involves a complete pipeline, from offline model training and synthesis to deployment and execution, ensuring compatibility with the accelerator’s architecture.</p> </div> <div class="ltx_para" id="S4.SS2.SSS2.p2"> <p class="ltx_p" id="S4.SS2.SSS2.p2.1">Models are trained and optimised offline before being transferred to the OmniBuds via BLE. The engine then loads and executes the model on the CNN accelerator, making inference results available to other applications. To conserve power, the engine dynamically configures the accelerator, shutting down unused hardware sections during inference.</p> </div> <div class="ltx_para" id="S4.SS2.SSS2.p3"> <p class="ltx_p" id="S4.SS2.SSS2.p3.1">Looking ahead, the Machine Learning Engine is designed to support future capabilities such as splitting inference tasks across both earbuds, enabling distributed computation, and running multiple models concurrently. These features will extend the flexibility of the system, enhancing performance and energy efficiency.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.3. </span>Audio Manager</h4> <figure class="ltx_figure" id="S4.F10"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="451" id="S4.F10.g1" src="x7.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 10. </span>Fast DSP algorithms for ANC/Pass-through and music playback and calls.</figcaption> </figure> <figure class="ltx_figure" id="S4.F11"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="456" id="S4.F11.g1" src="x8.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 11. </span>Slow DSP algorithms to support sensing applications like spoken keyword spotting and multi-modal blood pressure estimation.</figcaption> </figure> <div class="ltx_para" id="S4.SS2.SSS3.p1"> <p class="ltx_p" id="S4.SS2.SSS3.p1.1">The Audio Manager handles the audio processing pipelines on the dedicated audio DSP, divided into <span class="ltx_text ltx_font_italic" id="S4.SS2.SSS3.p1.1.1">time-critical</span> and <span class="ltx_text ltx_font_italic" id="S4.SS2.SSS3.p1.1.2">non-time-critical</span> categories.</p> </div> <section class="ltx_paragraph" id="S4.SS2.SSS3.Px1"> <h5 class="ltx_title ltx_title_paragraph">Time-Critical Pipelines</h5> <div class="ltx_para" id="S4.SS2.SSS3.Px1.p1"> <p class="ltx_p" id="S4.SS2.SSS3.Px1.p1.1">Time-critical pipelines include processes such as Active Noise Cancellation (ANC) and pass-through functionality. These pipelines run on the Fast-DSP core, which is optimised for low-latency processing, operating at a high sampling frequency of <math alttext="F_{s}=384" class="ltx_Math" display="inline" id="S4.SS2.SSS3.Px1.p1.1.m1.1"><semantics id="S4.SS2.SSS3.Px1.p1.1.m1.1a"><mrow id="S4.SS2.SSS3.Px1.p1.1.m1.1.1" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.cmml"><msub id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.cmml"><mi id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.2" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.2.cmml">F</mi><mi id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.3" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.3.cmml">s</mi></msub><mo id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.1" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.1.cmml">=</mo><mn id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.3" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.3.cmml">384</mn></mrow><annotation-xml encoding="MathML-Content" id="S4.SS2.SSS3.Px1.p1.1.m1.1b"><apply id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1"><eq id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.1.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.1"></eq><apply id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2"><csymbol cd="ambiguous" id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.1.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2">subscript</csymbol><ci id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.2.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.2">𝐹</ci><ci id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.3.cmml" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.2.3">𝑠</ci></apply><cn id="S4.SS2.SSS3.Px1.p1.1.m1.1.1.3.cmml" type="integer" xref="S4.SS2.SSS3.Px1.p1.1.m1.1.1.3">384</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S4.SS2.SSS3.Px1.p1.1.m1.1c">F_{s}=384</annotation><annotation encoding="application/x-llamapun" id="S4.SS2.SSS3.Px1.p1.1.m1.1d">italic_F start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 384</annotation></semantics></math> KHz. As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F10" title="Figure 10 ‣ 4.2.3. Audio Manager ‣ 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">10</span></a>, the Fast-DSP processes both ANC and pass-through in a shared structure, with differences in the parameters loaded into the feed-forward and feedback filter banks. For safety, when the pass-through functionality is enabled, the limiter ensures that no sound pressure levels exceed 85 dB-SPL. The Fast-DSP also manages music playback and phone calls, which are processed in a similar pipeline to ANC and pass-through but differ in the filter parameters used. In both music and phone calls, audio content coming from the Bluetooth SOC needs to be routed to the earbud’s speaker to be played back.</p> </div> <div class="ltx_para" id="S4.SS2.SSS3.Px1.p2"> <p class="ltx_p" id="S4.SS2.SSS3.Px1.p2.1">While ANC and pass-through are mutually exclusive, they can coexist with music playback or calls, where the outputs from both pipelines are mixed before being sent to the speaker. In this dual-processing configuration, different filter parameters are loaded to compensate for the effect of ANC or pass-through on the ear canal’s acoustics.</p> </div> </section> <section class="ltx_paragraph" id="S4.SS2.SSS3.Px2"> <h5 class="ltx_title ltx_title_paragraph">Non-Time-Critical Pipelines</h5> <div class="ltx_para" id="S4.SS2.SSS3.Px2.p1"> <p class="ltx_p" id="S4.SS2.SSS3.Px2.p1.1">Non-time-critical pipelines (see Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.F11" title="Figure 11 ‣ 4.2.3. Audio Manager ‣ 4.2. Sense and Compute Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">11</span></a>) handle tasks that are less latency-sensitive, such as keyword spotting, blood pressure monitoring, and heart rate detection, running on the Slow-DSP core at a lower sampling frequency of <math alttext="F_{s}=48" class="ltx_Math" display="inline" id="S4.SS2.SSS3.Px2.p1.1.m1.1"><semantics id="S4.SS2.SSS3.Px2.p1.1.m1.1a"><mrow id="S4.SS2.SSS3.Px2.p1.1.m1.1.1" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.cmml"><msub id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.cmml"><mi id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.2" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.2.cmml">F</mi><mi id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.3" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.3.cmml">s</mi></msub><mo id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.1" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.1.cmml">=</mo><mn id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.3" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.3.cmml">48</mn></mrow><annotation-xml encoding="MathML-Content" id="S4.SS2.SSS3.Px2.p1.1.m1.1b"><apply id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1"><eq id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.1.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.1"></eq><apply id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2"><csymbol cd="ambiguous" id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.1.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2">subscript</csymbol><ci id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.2.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.2">𝐹</ci><ci id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.3.cmml" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.2.3">𝑠</ci></apply><cn id="S4.SS2.SSS3.Px2.p1.1.m1.1.1.3.cmml" type="integer" xref="S4.SS2.SSS3.Px2.p1.1.m1.1.1.3">48</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S4.SS2.SSS3.Px2.p1.1.m1.1c">F_{s}=48</annotation><annotation encoding="application/x-llamapun" id="S4.SS2.SSS3.Px2.p1.1.m1.1d">italic_F start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 48</annotation></semantics></math> KHz. To achieve this, MEMS microphone signals—originally sampled at higher frequencies—are decimated.</p> </div> <div class="ltx_para" id="S4.SS2.SSS3.Px2.p2"> <p class="ltx_p" id="S4.SS2.SSS3.Px2.p2.1">The Slow-DSP offloads audio-related tasks, such as sensor sampling, filtering, and signal pre-conditioning, reducing the load on the MCU. For tasks like keyword spotting or blood pressure monitoring, the DSP routes MEMS microphone signals to the microcontroller via the <math alttext="I2S_{0}" class="ltx_Math" display="inline" id="S4.SS2.SSS3.Px2.p2.1.m1.1"><semantics id="S4.SS2.SSS3.Px2.p2.1.m1.1a"><mrow id="S4.SS2.SSS3.Px2.p2.1.m1.1.1" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.cmml"><mi id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.2" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.2.cmml">I</mi><mo id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1.cmml"></mo><mn id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.3" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.3.cmml">2</mn><mo id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1a" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1.cmml"></mo><msub id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.cmml"><mi id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.2" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.2.cmml">S</mi><mn id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.3" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.3.cmml">0</mn></msub></mrow><annotation-xml encoding="MathML-Content" id="S4.SS2.SSS3.Px2.p2.1.m1.1b"><apply id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1"><times id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.1"></times><ci id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.2.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.2">𝐼</ci><cn id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.3.cmml" type="integer" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.3">2</cn><apply id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4"><csymbol cd="ambiguous" id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.1.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4">subscript</csymbol><ci id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.2.cmml" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.2">𝑆</ci><cn id="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.3.cmml" type="integer" xref="S4.SS2.SSS3.Px2.p2.1.m1.1.1.4.3">0</cn></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S4.SS2.SSS3.Px2.p2.1.m1.1c">I2S_{0}</annotation><annotation encoding="application/x-llamapun" id="S4.SS2.SSS3.Px2.p2.1.m1.1d">italic_I 2 italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT</annotation></semantics></math> bus, where further processing or inference (using the CNN accelerator) takes place. During calls, the DSP similarly routes MEMS microphone signals to the Bluetooth SoC, ensuring efficient transmission to the host device while the MCU remains focused on higher-level application logic.</p> </div> </section> </section> </section> <section class="ltx_subsection" id="S4.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.3. </span>Load Balancing Subsystem</h3> <div class="ltx_para" id="S4.SS3.p1"> <p class="ltx_p" id="S4.SS3.p1.1">OmniBuds prioritise efficient battery management and balanced system performance given the limited battery capacity. The Load Balancing Subsystem ensures that both communication and computational tasks are distributed optimally across the two earbuds, enhancing battery life.</p> </div> <div class="ltx_para" id="S4.SS3.p2"> <p class="ltx_p" id="S4.SS3.p2.1">In terms of communication, OmniBuds present themselves as a single device to external devices (Section <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S4.SS1.SSS2" title="4.1.2. Communication Interface ‣ 4.1. Communication Subsystem ‣ 4. Software Architecture ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">4.1.2</span></a>). The primary earbud maintains the connection with the external device, routing any necessary information to and from the secondary earbud. To prevent one earbud from being overly burdened, the system periodically alternates the roles of primary and secondary between the two earbuds, balancing the communication load and distributing battery usage evenly.</p> </div> <div class="ltx_para" id="S4.SS3.p3"> <p class="ltx_p" id="S4.SS3.p3.1">For computational load balancing, OmniBuds include a dedicated Load Balancing Module. This module autonomously determines, at runtime, which earbud should execute a given task. The Load Balancer operates as a system-wide component, running on both earbuds and maintaining a shared database of peripherals and their states. When a peripheral is enabled, the Load Balancer determines its operational state based on the active policies, ensuring that tasks are executed efficiently across the earbuds.</p> </div> </section> </section> <section class="ltx_section" id="S5"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">5. </span>Smartphone Application</h2> <figure class="ltx_figure" id="S5.F12"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_portrait" height="519" id="S5.F12.g1" src="extracted/5905686/Images/ob_app_dashboard_crop_transparent.png" width="299"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 12. </span>Dashboard of the OmniBuds companion app.</figcaption> </figure> <div class="ltx_para" id="S5.p1"> <p class="ltx_p" id="S5.p1.1">The OmniBuds come with a companion app for both iOS and Android, providing real-time monitoring and visualisation of physiological data. Through a dashboard (Figure <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#S5.F12" title="Figure 12 ‣ 5. Smartphone Application ‣ OmniBuds: A Sensory Earable Platform for Advanced Bio-Sensing and On-Device Machine Learning"><span class="ltx_text ltx_ref_tag">12</span></a>), users can view six vital signs: heart rate, heart rate variability, respiratory rate, blood oxygen saturation, body temperature, and blood pressure. Historical data is accessible with a simple tap, enabling health tracking over daily, monthly, and yearly intervals. The app also allows users to control acoustic settings, switch between transparency and noise-cancelling modes, and initiate blood pressure measurements directly from the dashboard.</p> </div> <div class="ltx_para" id="S5.p2"> <p class="ltx_p" id="S5.p2.1">For advanced users and developers, the app includes a Developer Mode, which allows the collection of raw, unprocessed sensor data. The data can be saved as <span class="ltx_text ltx_font_italic" id="S5.p2.1.1">.csv</span> files for further analysis, offering invaluable opportunities for experimentation and research. Additionally, the Developer Mode enables modifications to parameters such as sensor sampling rates, LED current for the PPG sensor, and the angular rate for the accelerometer, among others, as well as the choice of which sensor to enable/disable. In addition, when data is being recorded, users can visualise real-time plots of raw sensor data from a dedicated Plot View. This allows for immediate feedback on parameter fine-tuning.</p> </div> </section> <section class="ltx_section" id="S6"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">6. </span>Applications of OmniBuds</h2> <div class="ltx_para" id="S6.p1"> <p class="ltx_p" id="S6.p1.1">OmniBuds’ unique combination of sensors, computational capabilities, and compact form factor opens the door to a wide range of applications across multiple fields. While some of these applications have already been implemented, others remain potential future use cases. The following examples, although not exhaustive, showcase the versatility of OmniBuds, highlighting how the platform’s advanced features can be leveraged for innovative research and practical solutions in earable computing.</p> </div> <section class="ltx_subsection" id="S6.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">6.1. </span>Existing Applications</h3> <section class="ltx_subsubsection" id="S6.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.1. </span>Vital Signs Monitoring</h4> <div class="ltx_para" id="S6.SS1.SSS1.p1"> <p class="ltx_p" id="S6.SS1.SSS1.p1.1">The advanced sensing capabilities of OmniBuds, coupled with on-device processing and efficient use of computational resources, make it a versatile platform for continuous vital signs monitoring. By integrating multiple sensor modalities in a compact form factor, OmniBuds enable precise, real-time health tracking, supporting a wide range of research and healthcare applications. Out-of-the-box OmniBuds can monitor heart rate (HR), heart rate variability (HRV), respiration rate (RR), blood oxygen saturation (SpO<sub class="ltx_sub" id="S6.SS1.SSS1.p1.1.1"><span class="ltx_text ltx_font_italic" id="S6.SS1.SSS1.p1.1.1.1">2</span></sub>), blood pressure, and skin temperature. While skin temperature is measured directly, the other vitals are computed on-device, utilising either the main processor or the dedicated BioHub.</p> </div> <div class="ltx_para" id="S6.SS1.SSS1.p2"> <p class="ltx_p" id="S6.SS1.SSS1.p2.1"><span class="ltx_text ltx_font_bold" id="S6.SS1.SSS1.p2.1.1">Heart Rate and Heart Rate Variability</span> OmniBuds extract HR and HRV using the PPG signal, where HR is determined from peaks in the signal, and HRV is derived from inter-beat intervals. These computations occur on the BioHub, which also mitigates motion artefacts by combining PPG data with IMU readings.</p> </div> <div class="ltx_para" id="S6.SS1.SSS1.p3"> <p class="ltx_p" id="S6.SS1.SSS1.p3.1"><span class="ltx_text ltx_font_bold" id="S6.SS1.SSS1.p3.1.1">Blood Oxygen Saturation</span> Blood oxygen saturation (SpO<sub class="ltx_sub" id="S6.SS1.SSS1.p3.1.2"><span class="ltx_text ltx_font_italic" id="S6.SS1.SSS1.p3.1.2.1">2</span></sub>) is calculated by analysing the ratio of the pulsatile and non-pulsatile components of red and infrared PPG signals. This computation is also handled by the BioHub, ensuring efficient processing while freeing the main MCU for other tasks.</p> </div> <div class="ltx_para" id="S6.SS1.SSS1.p4"> <p class="ltx_p" id="S6.SS1.SSS1.p4.1"><span class="ltx_text ltx_font_bold" id="S6.SS1.SSS1.p4.1.1">Respiration Rate</span> OmniBuds estimate RR through PPG-based respiratory-induced intensity variation (RIIV). The signal is filtered to remove cardiac frequencies, and FFT is applied to determine the respiratory rate <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Romero et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib35" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>)</cite>. This process is performed by the main MCU using the PPG raw data through the Sensor Distribution module which manages the access to BioHub and PPG for both processed data (HR, HRV and SpO<sub class="ltx_sub" id="S6.SS1.SSS1.p4.1.2"><span class="ltx_text ltx_font_italic" id="S6.SS1.SSS1.p4.1.2.1">2</span></sub>) and raw data.</p> </div> <div class="ltx_para" id="S6.SS1.SSS1.p5"> <p class="ltx_p" id="S6.SS1.SSS1.p5.1"><span class="ltx_text ltx_font_bold" id="S6.SS1.SSS1.p5.1.1">Blood Pressure</span> OmniBuds support cuff-less blood pressure measurement using a multi-modal technique that captures the time difference between the S1 heart sound (detected by the in-ear microphone) and the PPG upstroke, known as vascular transit time (VTT) <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Truong et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib37" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>. This, combined with ejection time (ET), enables systolic and diastolic blood pressure estimation through a personalised model. Future extensions could further enhance accuracy by leveraging OmniBuds’ symmetrical hardware for multi-location sensing <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Balaji et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib6" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>)</cite>.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.2. </span>Multi-modal Contextual Recognition</h4> <div class="ltx_para" id="S6.SS1.SSS2.p1"> <p class="ltx_p" id="S6.SS1.SSS2.p1.1">In addition to monitoring all five vital signs and their derivatives, the sensor suite in OmniBuds enables the detection of typical motion-based contexts commonly tracked by earable devices. These include physical activity <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Kawsar et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib25" title=""><span class="ltx_text" style="font-size:90%;">2018b</span></a>; <span class="ltx_text" style="font-size:90%;">Ma et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib28" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite>, head tracking <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib17" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>)</cite>, head gestures <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ma et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib28" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite>, and facial expressions <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Montanari et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib31" title=""><span class="ltx_text" style="font-size:90%;">2023</span></a>; <span class="ltx_text" style="font-size:90%;">Lee et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib27" title=""><span class="ltx_text" style="font-size:90%;">2019</span></a>)</cite>. While primarily driven by the integrated 9-axis IMU, more complex activities such as dietary monitoring <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Kawsar et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib25" title=""><span class="ltx_text" style="font-size:90%;">2018b</span></a>)</cite>, energy expenditure <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Gashi et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib21" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>, and mental fatigue assessment <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Kalanadhabhatta et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib23" title=""><span class="ltx_text" style="font-size:90%;">2021</span></a>)</cite> require combining data from multiple sensors, such as PPG and microphones.</p> </div> <div class="ltx_para" id="S6.SS1.SSS2.p2"> <p class="ltx_p" id="S6.SS1.SSS2.p2.1">What sets OmniBuds apart from previous platforms is its capability to run these diverse pipelines directly on-device, selecting the hardware component that is most suited to the task. For instance, low-power motion-based pipelines, such as activity recognition (e.g., walking, running) and head gestures detection (e.g., nodding, shaking), are efficiently processed directly on the IMU, minimising power consumption and reducing the load on the main MCU. Conversely, more complex tasks, such as dietary monitoring and fatigue estimation, can leverage the CNN accelerator to execute larger machine learning models, enabling the device to handle higher-capacity computations on-device with reduced latency <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Ardis and Muchsel</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib4" title=""><span class="ltx_text" style="font-size:90%;">[n. d.]</span></a>; <span class="ltx_text" style="font-size:90%;">Moss et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib32" title=""><span class="ltx_text" style="font-size:90%;">2022</span></a>)</cite>.</p> </div> </section> </section> <section class="ltx_subsection" id="S6.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">6.2. </span>Future Applications</h3> <section class="ltx_subsubsection" id="S6.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.1. </span>Emotion Recognition and Augmented Feedback</h4> <div class="ltx_para" id="S6.SS2.SSS1.p1"> <p class="ltx_p" id="S6.SS2.SSS1.p1.1">OmniBuds could potentially integrate emotion recognition based on physiological changes, such as heart rate, body temperature, voice tone and facial expressions. For example, the device could monitor stress, anxiety and fatigue and adjust audio feedback accordingly, offering calming sounds or adjusting music tempo to match the user’s emotional state <cite class="ltx_cite ltx_citemacro_citep">(<span class="ltx_text" style="font-size:90%;">Butkow et al</span><span class="ltx_text" style="font-size:90%;">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2410.04775v1#bib.bib9" title=""><span class="ltx_text" style="font-size:90%;">2024</span></a>)</cite>. This speculative application leverages multi-modal sensor data available on OmniBuds and their privacy-preserving design.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.2. </span>Gesture-Based Interface for Cognitive Augmentation</h4> <div class="ltx_para" id="S6.SS2.SSS2.p1"> <p class="ltx_p" id="S6.SS2.SSS2.p1.1">Using the IMU, OmniBuds could detect subtle head and facial movements, enabling gesture-based interfaces to control devices or interact with digital content hands-free. This could enhance AR/VR experiences or provide accessibility options for users with limited mobility.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.3. </span>Personalised Fitness Coaching with Motion Analysis</h4> <div class="ltx_para" id="S6.SS2.SSS3.p1"> <p class="ltx_p" id="S6.SS2.SSS3.p1.1">OmniBuds’ 9-axis IMU and CNN accelerator offer the potential for real-time motion analysis during workouts. By processing head movements and posture data directly on the device, OmniBuds could enable personalised fitness coaching, offering feedback on form and performance without needing external computation.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS2.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.4. </span>Adaptive Personal Assistant Based on Physical and Cognitive State</h4> <div class="ltx_para" id="S6.SS2.SSS4.p1"> <p class="ltx_p" id="S6.SS2.SSS4.p1.1">By monitoring a user’s physical and cognitive state, OmniBuds could adapt the behaviour of a personal assistant based on the user’s current conditions. For instance, when detecting fatigue or cognitive overload, the assistant could suggest breaks or adjust task complexity. This speculative application highlights the potential for a more responsive and adaptive interaction enabled by the OmniBuds’ computational units.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS2.SSS5"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.5. </span>Acoustic Augmented Reality for Situational Awareness</h4> <div class="ltx_para" id="S6.SS2.SSS5.p1"> <p class="ltx_p" id="S6.SS2.SSS5.p1.1">Using its dual microphones and Audio DSP, OmniBuds could enhance situational awareness by amplifying critical environmental sounds, such as approaching vehicles or alerts, while reducing background noise. This speculative application could augment safety in urban environments or outdoor activities. This shows the potential for real-time auditory augmentation offered by OmniBuds.</p> </div> </section> </section> </section> <section class="ltx_section" id="S7"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">7. </span>Conclusion</h2> <div class="ltx_para" id="S7.p1"> <p class="ltx_p" id="S7.p1.1">OmniBuds represent a significant advancement in the field of earable sensing and computing, offering a versatile platform that combines cutting-edge hardware with a flexible software architecture. With their integrated health monitoring, energy-efficient computing, and privacy-preserving design, OmniBuds open up new possibilities for researchers exploring diverse applications in wearable technology. As the platform continues to evolve, it has the potential to support groundbreaking research in areas such as health monitoring, cognitive interaction, and beyond. For updates and more information, visit <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.omnibuds.tech/" title="">https://www.omnibuds.tech/</a>.</p> </div> </section> <section class="ltx_bibliography" id="bib"> <h2 class="ltx_title ltx_title_bibliography" style="font-size:90%;">References</h2> <ul class="ltx_biblist"> <li class="ltx_bibitem" id="bib.bib1"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib1.2.2.1" style="font-size:90%;">(1)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib1.3.1" style="font-size:90%;"> </span> </span> </li> <li class="ltx_bibitem" id="bib.bib2"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib2.4.4.1" style="font-size:90%;">app ([n. d.])</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib2.6.1" style="font-size:90%;"> [n. d.]. </span> </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://support.apple.com/en-mide/111851" style="font-size:90%;" title="">https://support.apple.com/en-mide/111851</a><span class="ltx_text" id="bib.bib2.7.1" style="font-size:90%;">. </span> </span> <span class="ltx_bibblock"> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib2.8.1" style="font-size:90%;">[Accessed 09-09-2024]. </span> </span> </li> <li class="ltx_bibitem" id="bib.bib3"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib3.4.4.1" style="font-size:90%;">bos ([n. d.])</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib3.6.1" style="font-size:90%;"> [n. d.]. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib3.7.1" style="font-size:90%;">Bose QuietComfort Earbuds II — Bose — boseindia.com. </span> </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.boseindia.com/en_in/products/headphones/earbuds/quietcomfort-earbuds-ii.html" style="font-size:90%;" title="">https://www.boseindia.com/en_in/products/headphones/earbuds/quietcomfort-earbuds-ii.html</a><span class="ltx_text" id="bib.bib3.8.1" style="font-size:90%;">. </span> </span> <span class="ltx_bibblock"> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib3.9.1" style="font-size:90%;">[Accessed 08-09-2024]. </span> </span> </li> <li class="ltx_bibitem" id="bib.bib4"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib4.4.4.1" style="font-size:90%;">Ardis and Muchsel ([n. d.])</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib4.6.1" style="font-size:90%;"> Kristopher Ardis and Robert Muchsel. [n. d.]. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib4.7.1" style="font-size:90%;">Cutting the AI Power Cord: Technology to Enable True Edge Inference. </span> </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://cms.tinyml.org/wp-content/uploads/talks2020/tinyML_Talks_Kris_Ardis_and_Robert_Muchsel_-201027.pdf" style="font-size:90%;" title="">https://cms.tinyml.org/wp-content/uploads/talks2020/tinyML_Talks_Kris_Ardis_and_Robert_Muchsel_-201027.pdf</a><span class="ltx_text" id="bib.bib4.8.1" style="font-size:90%;">. </span> </span> <span class="ltx_bibblock"> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib4.9.1" style="font-size:90%;">[Accessed 09-09-2024]. </span> </span> </li> <li class="ltx_bibitem" id="bib.bib5"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib5.5.5.1" style="font-size:90%;">Atallah et al</span><span class="ltx_text" id="bib.bib5.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib5.7.7.3" style="font-size:90%;"> (2014)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib5.9.1" style="font-size:90%;"> L Atallah, A Wiik, B Lo, JP Cobb, AA Amis, and GZ Yang. 2014. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib5.10.1" style="font-size:90%;">Gait asymmetry detection in older adults using a light ear-worn sensor. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib5.11.1" style="font-size:90%;">Physiological measurement</em><span class="ltx_text" id="bib.bib5.12.2" style="font-size:90%;"> 35, 5 (2014), N29. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib6"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib6.5.5.1" style="font-size:90%;">Balaji et al</span><span class="ltx_text" id="bib.bib6.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib6.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib6.9.1" style="font-size:90%;"> Ananta Narayanan Balaji, Andrea Ferlini, Fahim Kawsar, and Alessandro Montanari. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib6.10.1" style="font-size:90%;">Stereo-bp: Non-invasive blood pressure sensing with earables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib6.11.2" style="font-size:90%;">Proceedings of the 24th International Workshop on Mobile Computing Systems and Applications</em><span class="ltx_text" id="bib.bib6.12.3" style="font-size:90%;">. 96–102. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib7"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib7.4.4.1" style="font-size:90%;">Bleichner and Debener (2017)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib7.6.1" style="font-size:90%;"> Martin G Bleichner and Stefan Debener. 2017. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib7.7.1" style="font-size:90%;">Concealed, unobtrusive ear-centered EEG acquisition: cEEGrids for transparent EEG. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib7.8.1" style="font-size:90%;">Frontiers in human neuroscience</em><span class="ltx_text" id="bib.bib7.9.2" style="font-size:90%;"> 11 (2017), 163. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib8"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib8.5.5.1" style="font-size:90%;">Butkow et al</span><span class="ltx_text" id="bib.bib8.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib8.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib8.9.1" style="font-size:90%;"> Kayla-Jade Butkow, Ting Dang, Andrea Ferlini, Dong Ma, and Cecilia Mascolo. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib8.10.1" style="font-size:90%;">heart: Motion-resilient heart rate monitoring with in-ear microphones. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib8.11.2" style="font-size:90%;">2023 IEEE International Conference on Pervasive Computing and Communications (PerCom)</em><span class="ltx_text" id="bib.bib8.12.3" style="font-size:90%;">. IEEE, 200–209. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib9"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib9.5.5.1" style="font-size:90%;">Butkow et al</span><span class="ltx_text" id="bib.bib9.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib9.7.7.3" style="font-size:90%;"> (2024)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib9.9.1" style="font-size:90%;"> Kayla-Jade Butkow, Andrea Ferlini, Fahim Kawsar, Cecilia Mascolo, and Alessandro Montanari. 2024. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib9.10.1" style="font-size:90%;">EarTune: Exploring the Physiology of Music Listening. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib9.11.2" style="font-size:90%;">Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing</em><span class="ltx_text" id="bib.bib9.12.3" style="font-size:90%;">. 644–649. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib10"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib10.5.5.1" style="font-size:90%;">Chan et al</span><span class="ltx_text" id="bib.bib10.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib10.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib10.9.1" style="font-size:90%;"> Justin Chan, Antonio Glenn, Malek Itani, Lisa R Mancl, Emily Gallagher, Randall Bly, Shwetak Patel, and Shyamnath Gollakota. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib10.10.1" style="font-size:90%;">Wireless earbuds for low-cost hearing screening. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib10.11.2" style="font-size:90%;">Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services</em><span class="ltx_text" id="bib.bib10.12.3" style="font-size:90%;">. 84–95. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib11"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib11.5.5.1" style="font-size:90%;">Choi et al</span><span class="ltx_text" id="bib.bib11.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib11.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib11.9.1" style="font-size:90%;"> Seokmin Choi, Yang Gao, Yincheng Jin, Se Jun Kim, Jiyang Li, Wenyao Xu, and Zhanpeng Jin. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib11.10.1" style="font-size:90%;">PPGface: Like what you are watching? Earphones can” feel” your facial expressions. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib11.11.1" style="font-size:90%;">Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies</em><span class="ltx_text" id="bib.bib11.12.2" style="font-size:90%;"> 6, 2 (2022), 1–32. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib12"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib12.4.4.1" style="font-size:90%;">Choudhury (2021)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib12.6.1" style="font-size:90%;"> Romit Roy Choudhury. 2021. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib12.7.1" style="font-size:90%;">Earable computing: A new area to think about. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib12.8.2" style="font-size:90%;">Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications</em><span class="ltx_text" id="bib.bib12.9.3" style="font-size:90%;">. 147–153. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib13"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib13.5.5.1" style="font-size:90%;">Demirel et al</span><span class="ltx_text" id="bib.bib13.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib13.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib13.9.1" style="font-size:90%;"> Berken Utku Demirel, Khaldoon Al-Naimi, Fahim Kawsar, and Alessandro Montanari. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib13.10.1" style="font-size:90%;">Cancelling Intermodulation Distortions for Otoacoustic Emission Measurements with Earbuds. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib13.11.2" style="font-size:90%;">ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</em><span class="ltx_text" id="bib.bib13.12.3" style="font-size:90%;">. IEEE, 1–5. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib14"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib14.5.5.1" style="font-size:90%;">Demirel et al</span><span class="ltx_text" id="bib.bib14.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib14.7.7.3" style="font-size:90%;"> (2024)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib14.9.1" style="font-size:90%;"> Berken Utku Demirel, Ting Dang, Khaldoon Al-Naimi, Fahim Kawsar, and Alessandro Montanari. 2024. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib14.10.1" style="font-size:90%;">Unobtrusive air leakage estimation for earables with in-ear microphones. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib14.11.1" style="font-size:90%;">Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</em><span class="ltx_text" id="bib.bib14.12.2" style="font-size:90%;"> 7, 4 (2024), 1–29. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib15"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib15.5.5.1" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" id="bib.bib15.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib15.7.7.3" style="font-size:90%;"> (2021a)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib15.9.1" style="font-size:90%;"> Andrea Ferlini, Dong Ma, Robert Harle, and Cecilia Mascolo. 2021a. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib15.10.1" style="font-size:90%;">EarGate: gait-based user identification with in-ear microphones. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib15.11.2" style="font-size:90%;">Proceedings of the 27th Annual International Conference on Mobile Computing and Networking</em><span class="ltx_text" id="bib.bib15.12.3" style="font-size:90%;">. 337–349. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib16"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib16.5.5.1" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" id="bib.bib16.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib16.7.7.3" style="font-size:90%;"> (2021b)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib16.9.1" style="font-size:90%;"> Andrea Ferlini, Alessandro Montanari, Andreas Grammenos, Robert Harle, and Cecilia Mascolo. 2021b. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib16.10.1" style="font-size:90%;">Enabling in-ear magnetic sensing: Automatic and user transparent magnetometer calibration. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib16.11.2" style="font-size:90%;">2021 IEEE International Conference on Pervasive Computing and Communications (PerCom)</em><span class="ltx_text" id="bib.bib16.12.3" style="font-size:90%;">. IEEE, 1–8. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib17"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib17.5.5.1" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" id="bib.bib17.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib17.7.7.3" style="font-size:90%;"> (2019)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib17.9.1" style="font-size:90%;"> Andrea Ferlini, Alessandro Montanari, Cecilia Mascolo, and Robert Harle. 2019. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib17.10.1" style="font-size:90%;">Head motion tracking through in-ear wearables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib17.11.2" style="font-size:90%;">Proceedings of the 1st International Workshop on Earable Computing</em><span class="ltx_text" id="bib.bib17.12.3" style="font-size:90%;">. 8–13. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib18"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib18.5.5.1" style="font-size:90%;">Ferlini et al</span><span class="ltx_text" id="bib.bib18.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib18.7.7.3" style="font-size:90%;"> (2021c)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib18.9.1" style="font-size:90%;"> Andrea Ferlini, Alessandro Montanari, Chulhong Min, Hongwei Li, Ugo Sassi, and Fahim Kawsar. 2021c. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib18.10.1" style="font-size:90%;">In-ear ppg for vital signs. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib18.11.1" style="font-size:90%;">IEEE Pervasive Computing</em><span class="ltx_text" id="bib.bib18.12.2" style="font-size:90%;"> 21, 1 (2021), 65–74. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib19"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib19.5.5.1" style="font-size:90%;">Franklin et al</span><span class="ltx_text" id="bib.bib19.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib19.7.7.3" style="font-size:90%;"> (2021)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib19.9.1" style="font-size:90%;"> Matija Franklin, David Lagnado, Chulhong Min, Akhil Mathur, and Fahim Kawsar. 2021. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib19.10.1" style="font-size:90%;">Designing memory aids for dementia patients using earables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib19.11.2" style="font-size:90%;">Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers</em><span class="ltx_text" id="bib.bib19.12.3" style="font-size:90%;">. 152–157. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib20"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib20.4.4.1" style="font-size:90%;">Gamma (1995)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib20.6.1" style="font-size:90%;"> Erich Gamma. 1995. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib20.7.1" style="font-size:90%;">Design patterns: elements of reusable object-oriented software. </span> </span> <span class="ltx_bibblock"> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib21"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib21.5.5.1" style="font-size:90%;">Gashi et al</span><span class="ltx_text" id="bib.bib21.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib21.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib21.9.1" style="font-size:90%;"> Shkurta Gashi, Chulhong Min, Alessandro Montanari, Silvia Santini, and Fahim Kawsar. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib21.10.1" style="font-size:90%;">A multidevice and multimodal dataset for human energy expenditure estimation using wearable devices. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib21.11.1" style="font-size:90%;">Scientific Data</em><span class="ltx_text" id="bib.bib21.12.2" style="font-size:90%;"> 9, 1 (2022), 537. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib22"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib22.5.5.1" style="font-size:90%;">Goverdovsky et al</span><span class="ltx_text" id="bib.bib22.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib22.7.7.3" style="font-size:90%;"> (2017)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib22.9.1" style="font-size:90%;"> Valentin Goverdovsky, Wilhelm Von Rosenberg, Takashi Nakamura, David Looney, David J Sharp, Christos Papavassiliou, Mary J Morrell, and Danilo P Mandic. 2017. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib22.10.1" style="font-size:90%;">Hearables: Multimodal physiological in-ear sensing. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib22.11.1" style="font-size:90%;">Scientific reports</em><span class="ltx_text" id="bib.bib22.12.2" style="font-size:90%;"> 7, 1 (2017), 6948. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib23"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib23.5.5.1" style="font-size:90%;">Kalanadhabhatta et al</span><span class="ltx_text" id="bib.bib23.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib23.7.7.3" style="font-size:90%;"> (2021)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib23.9.1" style="font-size:90%;"> Manasa Kalanadhabhatta, Chulhong Min, Alessandro Montanari, and Fahim Kawsar. 2021. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib23.10.1" style="font-size:90%;">FatigueSet: A multi-modal dataset for modeling mental fatigue and fatigability. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib23.11.2" style="font-size:90%;">International Conference on Pervasive Computing Technologies for Healthcare</em><span class="ltx_text" id="bib.bib23.12.3" style="font-size:90%;">. Springer, 204–217. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib24"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib24.5.5.1" style="font-size:90%;">Kawsar et al</span><span class="ltx_text" id="bib.bib24.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib24.7.7.3" style="font-size:90%;"> (2018a)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib24.9.1" style="font-size:90%;"> Fahim Kawsar, Chulhong Min, Akhil Mathur, and Alessandro Montanari. 2018a. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib24.10.1" style="font-size:90%;">Earables for personal-scale behavior analytics. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib24.11.1" style="font-size:90%;">IEEE Pervasive Computing</em><span class="ltx_text" id="bib.bib24.12.2" style="font-size:90%;"> 17, 3 (2018), 83–89. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib25"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib25.5.5.1" style="font-size:90%;">Kawsar et al</span><span class="ltx_text" id="bib.bib25.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib25.7.7.3" style="font-size:90%;"> (2018b)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib25.9.1" style="font-size:90%;"> Fahim Kawsar, Chulhong Min, Akhil Mathur, Alessandro Montanari, Utku Günay Acer, and Marc Van den Broeck. 2018b. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib25.10.1" style="font-size:90%;">eSense: Open earable platform for human sensing. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib25.11.2" style="font-size:90%;">Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems</em><span class="ltx_text" id="bib.bib25.12.3" style="font-size:90%;">. 371–372. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib26"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib26.5.5.1" style="font-size:90%;">Kidmose et al</span><span class="ltx_text" id="bib.bib26.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib26.7.7.3" style="font-size:90%;"> (2013)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib26.9.1" style="font-size:90%;"> Preben Kidmose, David Looney, Michael Ungstrup, Mike Lind Rank, and Danilo P Mandic. 2013. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib26.10.1" style="font-size:90%;">A study of evoked potentials from ear-EEG. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib26.11.1" style="font-size:90%;">IEEE Transactions on Biomedical Engineering</em><span class="ltx_text" id="bib.bib26.12.2" style="font-size:90%;"> 60, 10 (2013), 2824–2830. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib27"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib27.5.5.1" style="font-size:90%;">Lee et al</span><span class="ltx_text" id="bib.bib27.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib27.7.7.3" style="font-size:90%;"> (2019)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib27.9.1" style="font-size:90%;"> Seungchul Lee, Chulhong Min, Alessandro Montanari, Akhil Mathur, Youngjae Chang, Junehwa Song, and Fahim Kawsar. 2019. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib27.10.1" style="font-size:90%;">Automatic smile and frown recognition with kinetic earables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib27.11.2" style="font-size:90%;">Proceedings of the 10th Augmented Human International Conference 2019</em><span class="ltx_text" id="bib.bib27.12.3" style="font-size:90%;">. 1–4. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib28"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib28.5.5.1" style="font-size:90%;">Ma et al</span><span class="ltx_text" id="bib.bib28.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib28.7.7.3" style="font-size:90%;"> (2021)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib28.9.1" style="font-size:90%;"> Dong Ma, Andrea Ferlini, and Cecilia Mascolo. 2021. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib28.10.1" style="font-size:90%;">Oesense: employing occlusion effect for in-ear human sensing. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib28.11.2" style="font-size:90%;">Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services</em><span class="ltx_text" id="bib.bib28.12.3" style="font-size:90%;">. 175–187. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib29"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib29.5.5.1" style="font-size:90%;">Min et al</span><span class="ltx_text" id="bib.bib29.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib29.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib29.9.1" style="font-size:90%;"> Chulhong Min, Akhil Mathur, Utku Günay Acer, Alessandro Montanari, and Fahim Kawsar. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib29.10.1" style="font-size:90%;">SensiX++: Bringing mlops and multi-tenant model serving to sensory edge devices. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib29.11.1" style="font-size:90%;">ACM Transactions on Embedded Computing Systems</em><span class="ltx_text" id="bib.bib29.12.2" style="font-size:90%;"> 22, 6 (2023), 1–27. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib30"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib30.5.5.1" style="font-size:90%;">Min et al</span><span class="ltx_text" id="bib.bib30.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib30.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib30.9.1" style="font-size:90%;"> Chulhong Min, Akhil Mathur, Alessandro Montanari, and Fahim Kawsar. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib30.10.1" style="font-size:90%;">SensiX: A system for best-effort inference of machine learning models in multi-device environments. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib30.11.1" style="font-size:90%;">IEEE Transactions on Mobile Computing</em><span class="ltx_text" id="bib.bib30.12.2" style="font-size:90%;"> 22, 9 (2022), 5525–5538. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib31"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib31.5.5.1" style="font-size:90%;">Montanari et al</span><span class="ltx_text" id="bib.bib31.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib31.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib31.9.1" style="font-size:90%;"> Alessandro Montanari, Andrea Ferlini, Ananta Narayanan Balaji, Cecilia Mascolo, and Fahim Kawsar. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib31.10.1" style="font-size:90%;">EarSet: A Multi-Modal Dataset for Studying the Impact of Head and Facial Movements on In-Ear PPG Signals. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib31.11.1" style="font-size:90%;">Scientific Data</em><span class="ltx_text" id="bib.bib31.12.2" style="font-size:90%;"> 10, 1 (2023), 850. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib32"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib32.5.5.1" style="font-size:90%;">Moss et al</span><span class="ltx_text" id="bib.bib32.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib32.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib32.9.1" style="font-size:90%;"> Arthur Moss, Hyunjong Lee, Lei Xun, Chulhong Min, Fahim Kawsar, and Alessandro Montanari. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib32.10.1" style="font-size:90%;">Ultra-low power DNN accelerators for IoT: Resource characterization of the MAX78000. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib32.11.2" style="font-size:90%;">Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems</em><span class="ltx_text" id="bib.bib32.12.3" style="font-size:90%;">. 934–940. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib33"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib33.5.5.1" style="font-size:90%;">Prakash et al</span><span class="ltx_text" id="bib.bib33.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib33.7.7.3" style="font-size:90%;"> (2019)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib33.9.1" style="font-size:90%;"> Jay Prakash, Zhijian Yang, Yu-Lin Wei, and Romit Roy Choudhury. 2019. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib33.10.1" style="font-size:90%;">Stear: Robust step counting from earables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib33.11.2" style="font-size:90%;">Proceedings of the 1st International Workshop on Earable Computing</em><span class="ltx_text" id="bib.bib33.12.3" style="font-size:90%;">. 36–41. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib34"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib34.5.5.1" style="font-size:90%;">Röddiger et al</span><span class="ltx_text" id="bib.bib34.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib34.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib34.9.1" style="font-size:90%;"> Tobias Röddiger, Christopher Clarke, Paula Breitling, Tim Schneegans, Haibin Zhao, Hans Gellersen, and Michael Beigl. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib34.10.1" style="font-size:90%;">Sensing with earables: A systematic literature review and taxonomy of phenomena. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib34.11.1" style="font-size:90%;">Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies</em><span class="ltx_text" id="bib.bib34.12.2" style="font-size:90%;"> 6, 3 (2022), 1–57. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib35"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib35.5.5.1" style="font-size:90%;">Romero et al</span><span class="ltx_text" id="bib.bib35.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib35.7.7.3" style="font-size:90%;"> (2024)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib35.9.1" style="font-size:90%;"> Julia Romero, Andrea Ferlini, Dimitris Spathis, Ting Dang, Katayoun Farrahi, Fahim Kawsar, and Alessandro Montanari. 2024. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib35.10.1" style="font-size:90%;">OptiBreathe: An Earable-based PPG System for Continuous Respiration Rate, Breathing Phase, and Tidal Volume Monitoring. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib35.11.2" style="font-size:90%;">Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications</em><span class="ltx_text" id="bib.bib35.12.3" style="font-size:90%;">. 99–106. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib36"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib36.5.5.1" style="font-size:90%;">Shahid et al</span><span class="ltx_text" id="bib.bib36.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib36.7.7.3" style="font-size:90%;"> (2024)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib36.9.1" style="font-size:90%;"> Irtaza Shahid, Khaldoon Al-Naimi, Ting Dang, Yang Liu, Fahim Kawsar, and Alessandro Montanari. 2024. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib36.10.1" style="font-size:90%;">Towards Enabling DPOAE Estimation on Single-Speaker Earbuds. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib36.11.2" style="font-size:90%;">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</em><span class="ltx_text" id="bib.bib36.12.3" style="font-size:90%;">. IEEE, 246–250. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib37"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib37.5.5.1" style="font-size:90%;">Truong et al</span><span class="ltx_text" id="bib.bib37.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib37.7.7.3" style="font-size:90%;"> (2022)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib37.9.1" style="font-size:90%;"> Hoang Truong, Alessandro Montanari, and Fahim Kawsar. 2022. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib37.10.1" style="font-size:90%;">Non-invasive blood pressure monitoring with multi-modal in-ear sensing. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib37.11.2" style="font-size:90%;">ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</em><span class="ltx_text" id="bib.bib37.12.3" style="font-size:90%;">. IEEE, 6–10. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib38"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib38.5.5.1" style="font-size:90%;">Veluri et al</span><span class="ltx_text" id="bib.bib38.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib38.7.7.3" style="font-size:90%;"> (2023)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib38.9.1" style="font-size:90%;"> Bandhav Veluri, Malek Itani, Justin Chan, Takuya Yoshioka, and Shyamnath Gollakota. 2023. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib38.10.1" style="font-size:90%;">Semantic hearing: Programming acoustic scenes with binaural hearables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib38.11.2" style="font-size:90%;">Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology</em><span class="ltx_text" id="bib.bib38.12.3" style="font-size:90%;">. 1–15. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib39"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib39.5.5.1" style="font-size:90%;">Veluri et al</span><span class="ltx_text" id="bib.bib39.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib39.7.7.3" style="font-size:90%;"> (2024a)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib39.9.1" style="font-size:90%;"> Bandhav Veluri, Malek Itani, Tuochao Chen, and Shyamnath Gollakota. 2024a. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib39.10.1" style="font-size:90%;">AI-based headphones for augmenting human hearing. </span> </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib39.11.1" style="font-size:90%;">The Journal of the Acoustical Society of America</em><span class="ltx_text" id="bib.bib39.12.2" style="font-size:90%;"> 155, 3_Supplement (2024), A298–A298. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib40"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib40.5.5.1" style="font-size:90%;">Veluri et al</span><span class="ltx_text" id="bib.bib40.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib40.7.7.3" style="font-size:90%;"> (2024b)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib40.9.1" style="font-size:90%;"> Bandhav Veluri, Malek Itani, Tuochao Chen, Takuya Yoshioka, and Shyamnath Gollakota. 2024b. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib40.10.1" style="font-size:90%;">Look Once to Hear: Target Speech Hearing with Noisy Examples. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib40.11.2" style="font-size:90%;">Proceedings of the CHI Conference on Human Factors in Computing Systems</em><span class="ltx_text" id="bib.bib40.12.3" style="font-size:90%;">. 1–16. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib41"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib41.5.5.1" style="font-size:90%;">Yadav et al</span><span class="ltx_text" id="bib.bib41.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib41.7.7.3" style="font-size:90%;"> (2018)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib41.9.1" style="font-size:90%;"> Umang Yadav, Sherif N Abbas, and Dimitrios Hatzinakos. 2018. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib41.10.1" style="font-size:90%;">Evaluation of PPG biometrics for authentication in different states. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib41.11.2" style="font-size:90%;">2018 International Conference on Biometrics (ICB)</em><span class="ltx_text" id="bib.bib41.12.3" style="font-size:90%;">. IEEE, 277–282. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib42"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib42.4.4.1" style="font-size:90%;">Yang and Choudhury (2021)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib42.6.1" style="font-size:90%;"> Zhijian Yang and Romit Roy Choudhury. 2021. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib42.7.1" style="font-size:90%;">Personalizing head related transfer functions for earables. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib42.8.2" style="font-size:90%;">Proceedings of the 2021 ACM SIGCOMM 2021 Conference</em><span class="ltx_text" id="bib.bib42.9.3" style="font-size:90%;">. 137–150. </span> </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib43"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem"><span class="ltx_text" id="bib.bib43.5.5.1" style="font-size:90%;">Yang et al</span><span class="ltx_text" id="bib.bib43.6.6.2" style="font-size:90%;">.</span><span class="ltx_text" id="bib.bib43.7.7.3" style="font-size:90%;"> (2020)</span></span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib43.9.1" style="font-size:90%;"> Zhijian Yang, Yu-Lin Wei, Sheng Shen, and Romit Roy Choudhury. 2020. </span> </span> <span class="ltx_bibblock"><span class="ltx_text" id="bib.bib43.10.1" style="font-size:90%;">Ear-ar: indoor acoustic augmented reality on earphones. In </span><em class="ltx_emph ltx_font_italic" id="bib.bib43.11.2" style="font-size:90%;">Proceedings of the 26th Annual International Conference on Mobile Computing and Networking</em><span class="ltx_text" id="bib.bib43.12.3" style="font-size:90%;">. 1–14. </span> </span> <span class="ltx_bibblock"> </span> </li> </ul> </section> </article> </div> <footer class="ltx_page_footer"> <div class="ltx_page_logo">Generated on Mon Oct 7 06:31:24 2024 by <a class="ltx_LaTeXML_logo" href="http://dlmf.nist.gov/LaTeXML/"><span style="letter-spacing:-0.2em; margin-right:0.1em;">L<span class="ltx_font_smallcaps" style="position:relative; bottom:2.2pt;">a</span>T<span class="ltx_font_smallcaps" style="font-size:120%;position:relative; bottom:-0.2ex;">e</span></span><span style="font-size:90%; position:relative; bottom:-0.2ex;">XML</span><img alt="Mascot Sammy" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAAOCAYAAAD5YeaVAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9wKExQZLWTEaOUAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAdpJREFUKM9tkL+L2nAARz9fPZNCKFapUn8kyI0e4iRHSR1Kb8ng0lJw6FYHFwv2LwhOpcWxTjeUunYqOmqd6hEoRDhtDWdA8ApRYsSUCDHNt5ul13vz4w0vWCgUnnEc975arX6ORqN3VqtVZbfbTQC4uEHANM3jSqXymFI6yWazP2KxWAXAL9zCUa1Wy2tXVxheKA9YNoR8Pt+aTqe4FVVVvz05O6MBhqUIBGk8Hn8HAOVy+T+XLJfLS4ZhTiRJgqIoVBRFIoric47jPnmeB1mW/9rr9ZpSSn3Lsmir1fJZlqWlUonKsvwWwD8ymc/nXwVBeLjf7xEKhdBut9Hr9WgmkyGEkJwsy5eHG5vN5g0AKIoCAEgkEkin0wQAfN9/cXPdheu6P33fBwB4ngcAcByHJpPJl+fn54mD3Gg0NrquXxeLRQAAwzAYj8cwTZPwPH9/sVg8PXweDAauqqr2cDjEer1GJBLBZDJBs9mE4zjwfZ85lAGg2+06hmGgXq+j3+/DsixYlgVN03a9Xu8jgCNCyIegIAgx13Vfd7vdu+FweG8YRkjXdWy329+dTgeSJD3ieZ7RNO0VAXAPwDEAO5VKndi2fWrb9jWl9Esul6PZbDY9Go1OZ7PZ9z/lyuD3OozU2wAAAABJRU5ErkJggg=="/></a> </div></footer> </div> </body> </html>