<!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="content-type"/> <title>WavChat: A Survey of Spoken Dialogue Models</title> <!--Generated on Thu Nov 14 18:22:34 2024 by LaTeXML (version 0.8.8) http://dlmf.nist.gov/LaTeXML/.--> <meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport"/> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv-fonts.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/latexml_styles.css" rel="stylesheet" type="text/css"/> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.3/html2canvas.min.js"></script> <script src="/static/browse/0.3.4/js/addons_new.js"></script> <script src="/static/browse/0.3.4/js/feedbackOverlay.js"></script> <base href="/html/2411.13577v1/"/></head> <body> <nav class="ltx_page_navbar"> <nav class="ltx_TOC"> <ol class="ltx_toclist"> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S1" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1 </span>Introduction</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2 </span>Overall</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1" title="In 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span 
class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1 </span>Functions of Spoken Dialogue Systems</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS1" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.1 </span>Text Intelligence</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS2" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.2 </span>Speech Intelligence</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS3" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.3 </span>Audio and Music Generation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS4" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.4 </span>Audio and Music Understanding</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS5" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.5 </span>Multilingual Capability</span></a></li> <li class="ltx_tocentry 
ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS6" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.6 </span>Context Learning</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS7" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.7 </span>Interaction Capability</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS8" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.8 </span>Streaming Latency</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS9" title="In 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1.9 </span>Multimodal Capability</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS2" title="In 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.2 </span>Cascaded Spoken Dialogue Systems</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS3" title="In 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.3 
</span>End-to-End Spoken Dialogue Systems</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3 </span>Representations of Spoken Dialogue Models</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS1" title="In 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.1 </span>Speech Representations at the Inputs</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS2" title="In 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2 </span>Speech Representations at the Outputs</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS3" title="In 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3 </span>Discussions about Representation used in Spoken Dialogue Systems</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS3.SSS1" title="In 3.3 Discussions about Representation used in Spoken Dialogue Systems ‣ 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3.1 </span>Semantic Representation vs. 
Acoustic Representation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS3.SSS2" title="In 3.3 Discussions about Representation used in Spoken Dialogue Systems ‣ 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3.2 </span>Continuous Representation vs. Discrete Representation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS3.SSS3" title="In 3.3 Discussions about Representation used in Spoken Dialogue Systems ‣ 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3.3 </span>Single-Layer Quantizer vs. Multi-Layer Quantizer</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.SS3.SSS4" title="In 3.3 Discussions about Representation used in Spoken Dialogue Systems ‣ 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.3.4 </span>With Text Guidance vs. 
Without Text Guidance</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4 </span>Training Paradigm of Spoken Dialogue Model</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS1" title="In 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.1 </span>Architecture Paradigm about Modal Alignment of Speech and Text</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS2" title="In 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2 </span>Multi-stage Training strategy</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS2.SSS1" title="In 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.1 </span>Text LLM Pre-Training</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS2.SSS2" title="In 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.2 </span>Modality Adaptation and Alignment Post-training</span></a></li> <li 
class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS2.SSS3" title="In 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.3 </span>Supervised Fine-tuning or Dialogue Dataset Fine-tuning</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS2.SSS4" title="In 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.4 </span>Preference Optimization and Reinforcement Learning</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS3" title="In 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3 </span>Training Frameworks and Generation Strategies</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS4" title="In 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4 </span>Discussions about Training Paradigm in Spoken Dialogue Models</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS4.SSS1" title="In 4.4 Discussions about Training Paradigm in Spoken Dialogue Models ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag 
ltx_tag_ref">4.4.1 </span>Text and Speech Modality Alignment</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS4.SSS2" title="In 4.4 Discussions about Training Paradigm in Spoken Dialogue Models ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4.2 </span>Different Temporal Alignment Methods in Spoken Dialogue Models</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.SS4.SSS3" title="In 4.4 Discussions about Training Paradigm in Spoken Dialogue Models ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4.3 </span>Reinforcement Learning (RL) in Spoken Dialogue Models</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5 </span>Streaming, Duplex, and Interaction</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS1" title="In 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.1 </span>Streaming Spoken Dialogue Models</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS1.SSS1" title="In 5.1 Streaming Spoken Dialogue Models ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span 
class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.1.1 </span>Streaming End-to-End Spoken Dialogue Models</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS1.SSS2" title="In 5.1 Streaming Spoken Dialogue Models ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.1.2 </span>Streaming Cascaded Spoken Dialogue Models</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS2" title="In 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.2 </span>Duplex Technology and Interaction</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS2.SSS1" title="In 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.2.1 </span>Duplex Technology</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS2.SSS2" title="In 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.2.2 </span>Interaction</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.SS2.SSS3" title="In 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span 
class="ltx_tag ltx_tag_ref">5.2.3 </span>Discussions about streaming and interaction</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6 </span>Training Resources and Evaluation</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS1" title="In 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1 </span>Training resources</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS1.SSS1" title="In 6.1 Training resources ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.1 </span>Training resources about Text LLM Pre-training</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS1.SSS2" title="In 6.1 Training resources ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.2 </span>Training resources about Post-Train for Audio Modal Alignment</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS1.SSS3" title="In 6.1 Training resources ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.3 </span>Training resources about Post-Train for 
Dual-Stream Dialogue Processing</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS1.SSS4" title="In 6.1 Training resources ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.1.4 </span>Training resources about Enhancing Conversational Abilities and Instruction Tuning</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS2" title="In 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2 </span>Evaluation</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS2.SSS1" title="In 6.2 Evaluation ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.1 </span>Common Evaluation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS2.SSS2" title="In 6.2 Evaluation ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.2.2 </span>Advanced Evaluation</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS3" title="In 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6.3 </span>Benchmark</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" 
href="https://arxiv.org/html/2411.13577v1#S7" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">7 </span>Conclusion</span></a></li> <li class="ltx_tocentry ltx_tocentry_appendix"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A1" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A </span>Resources about Music and Sound Datasets</span></a></li> <li class="ltx_tocentry ltx_tocentry_appendix"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A2" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">B </span>Open-source Spoken Dialogue Models</span></a></li> <li class="ltx_tocentry ltx_tocentry_appendix"><a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A3" title="In WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">C </span>Open-source Codec Models</span></a></li> </ol></nav> </nav> <div class="ltx_page_main"> <div class="ltx_page_content"> <article class="ltx_document ltx_authors_1line"> <h1 class="ltx_title ltx_title_document">WavChat: A Survey of Spoken Dialogue Models</h1> <div class="ltx_authors"> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname"> Shengpeng Ji <sup class="ltx_sup" id="id24.24.id1"><span class="ltx_text ltx_font_italic" id="id24.24.id1.1">♠</span></sup> Yifu Chen <sup class="ltx_sup" id="id25.25.id2"><span class="ltx_text ltx_font_italic" id="id25.25.id2.1">♠</span></sup> <span class="ltx_note ltx_role_footnotemark" id="footnotex1"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_note_type">footnotemark: </span><span class="ltx_tag ltx_tag_note">1</span></span></span></span> Minghui Fang <sup class="ltx_sup" 
id="id26.26.id3"><span class="ltx_text ltx_font_italic" id="id26.26.id3.1">♠</span></sup> <span class="ltx_note ltx_role_footnotemark" id="footnotex2"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_note_type">footnotemark: </span><span class="ltx_tag ltx_tag_note">1</span></span></span></span> Jialong Zuo <sup class="ltx_sup" id="id27.27.id4"><span class="ltx_text ltx_font_italic" id="id27.27.id4.1">♠</span></sup> <span class="ltx_note ltx_role_footnotemark" id="footnotex3"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_note_type">footnotemark: </span><span class="ltx_tag ltx_tag_note">1</span></span></span></span> Jingyu Lu <sup class="ltx_sup" id="id28.28.id5"><span class="ltx_text ltx_font_italic" id="id28.28.id5.1">♠</span></sup> Hanting Wang <sup class="ltx_sup" id="id29.29.id6"><span class="ltx_text ltx_font_italic" id="id29.29.id6.1">♠</span></sup> <br class="ltx_break"/><span class="ltx_text ltx_font_bold" id="id12.12.6"> Ziyue Jiang <sup class="ltx_sup" id="id12.12.6.1"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.1.1">♠</span></sup> Long Zhou <sup class="ltx_sup" id="id12.12.6.2"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.2.1">♢</span></sup> Shujie Liu <sup class="ltx_sup" id="id12.12.6.3"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.3.1">♢</span></sup> Xize Cheng <sup class="ltx_sup" id="id12.12.6.4"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.4.1">♠</span></sup> Xiaoda Yang <sup class="ltx_sup" id="id12.12.6.5"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.5.1">♠</span></sup> Zehan Wang <sup class="ltx_sup" id="id12.12.6.6"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id12.12.6.6.1">♠</span></sup> </span> <br 
class="ltx_break"/><span class="ltx_text ltx_font_bold" id="id19.19.13"> Qian Yang <sup class="ltx_sup" id="id19.19.13.1"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.1.1">♠</span></sup> Jian Li <sup class="ltx_sup" id="id19.19.13.2"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.2.1">♣</span></sup> Yidi Jiang <sup class="ltx_sup" id="id19.19.13.3"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.3.1">♡</span></sup> Jingzhen He <sup class="ltx_sup" id="id19.19.13.4"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.4.1">♡</span></sup> Yunfei Chu <sup class="ltx_sup" id="id19.19.13.5"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.5.1">♡</span></sup> Jin Xu <sup class="ltx_sup" id="id19.19.13.6"><span class="ltx_text ltx_font_medium ltx_font_italic" id="id19.19.13.6.1">♡</span></sup> Zhou Zhao <sup class="ltx_sup" id="id19.19.13.7"><span class="ltx_text ltx_font_medium" id="id19.19.13.7.1">♠</span></sup> </span> <br class="ltx_break"/><sup class="ltx_sup" id="id30.30.id7">♠</sup> Zhejiang University & <sup class="ltx_sup" id="id31.31.id8">♢</sup> Microsoft & <sup class="ltx_sup" id="id32.32.id9"><span class="ltx_text ltx_font_italic" id="id32.32.id9.1">♡</span></sup> Alibaba Group & <sup class="ltx_sup" id="id33.33.id10"><span class="ltx_text ltx_font_italic" id="id33.33.id10.1">♣</span></sup> Tencent YouTu Lab <br class="ltx_break"/><span class="ltx_text ltx_font_typewriter" id="id34.34.id11">{shengpengji,zhaozhou}@zju.edu.cn</span> </span><span class="ltx_author_notes">Equal contribution.<span class="ltx_text ltx_font_bold" id="id35.35.id1">Corresponding author.</span></span></span> </div> <div class="ltx_abstract"> <h6 class="ltx_title ltx_title_abstract">Abstract</h6> <p class="ltx_p" id="id36.id1">Recent advancements in spoken dialogue models, exemplified by systems like GPT-4o, have captured significant attention in the speech domain. 
In the broader context of multimodal models, the speech modality offers a direct interface for human-computer interaction, enabling immediate communication between AI and users. Compared to traditional three-tier cascaded spoken dialogue models that comprise speech recognition (ASR), large language models (LLMs), and text-to-speech (TTS), modern spoken dialogue models exhibit greater intelligence. These advanced spoken dialogue models not only comprehend audio, music, and other speech-related features, but also capture stylistic and timbral characteristics in speech. Moreover, they generate high-quality, multi-turn speech responses with low latency, enabling real-time interaction through simultaneous listening and speaking capabilities. Despite the progress in spoken dialogue systems, there is a lack of comprehensive surveys that systematically organize and analyze these systems and the underlying technologies. To address this, <span class="ltx_text ltx_font_bold" id="id36.id1.1">we have first compiled existing spoken dialogue systems in chronological order and categorized them into the cascaded and end-to-end paradigms.</span> We then provide an in-depth overview of the core technologies in spoken dialogue models, covering aspects such as <span class="ltx_text ltx_font_bold" id="id36.id1.2">speech representation, training paradigm, streaming, duplex, and interaction capabilities.</span> Each section discusses the limitations of these technologies and outlines considerations for future research. Additionally, we present a thorough review of <span class="ltx_text ltx_font_bold" id="id36.id1.3">relevant datasets, evaluation metrics, and benchmarks</span> from the perspectives of training and evaluating spoken dialogue systems. We hope this survey will contribute to advancing both academic research and industrial applications in the field of spoken dialogue systems. 
The related material is available at <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/jishengpeng/WavChat" title="">https://github.com/jishengpeng/WavChat</a>.</p> </div> <section class="ltx_section" id="S1"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">1 </span>Introduction</h2> <div class="ltx_para" id="S1.p1"> <p class="ltx_p" id="S1.p1.1">Spoken dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> represent one of the most direct methods of human-computer interaction, evolving from traditional voice assistants such as Alexa<span class="ltx_note ltx_role_footnote" id="footnote1"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_tag ltx_tag_note">1</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.alexa.com/" title="">https://www.alexa.com/</a></span></span></span>, Siri<span class="ltx_note ltx_role_footnote" id="footnote2"><sup class="ltx_note_mark">2</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">2</sup><span class="ltx_tag ltx_tag_note">2</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.apple.com/siri/" title="">https://www.apple.com/siri/</a></span></span></span>, and Google Assistant<span class="ltx_note ltx_role_footnote" id="footnote3"><sup class="ltx_note_mark">3</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">3</sup><span class="ltx_tag ltx_tag_note">3</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://assistant.google.com/" 
title="">https://assistant.google.com/</a></span></span></span> to the latest intelligent dialogue systems, such as GPT-4o<span class="ltx_note ltx_role_footnote" id="footnote4"><sup class="ltx_note_mark">4</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">4</sup><span class="ltx_tag ltx_tag_note">4</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://openai.com/index/chatgpt-can-now-see-hear-and-speak/" title="">https://openai.com/index/chatgpt-can-now-see-hear-and-speak/</a></span></span></span>. Fundamentally, a spoken dialogue model is a dialogue system capable of generating intelligent verbal responses to input speech. On the one hand, the <span class="ltx_text ltx_font_bold" id="S1.p1.1.1">speech modality</span> serves as both the input and output interface for human-computer interaction in spoken dialogue models. On the other hand, the <span class="ltx_text ltx_font_bold" id="S1.p1.1.2">dialogue system</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib52" title="">52</a>]</cite> requires the model to possess a certain level of textual intelligence, including the ability to comprehend the knowledge of human society and to generate professional, intelligent responses. Recently, intelligent spoken dialogue systems, exemplified by GPT-4o and Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, have garnered significant attention for their ability to extend speech intelligence capabilities beyond traditional text-based dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib84" title="">84</a>]</cite>. 
These dialogue models can not only generate natural, human-like speech responses <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib195" title="">195</a>]</cite> but also demonstrate an advanced understanding and generation of acoustic features beyond text, such as timbre, emotion, and style <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite>. Additionally, they exhibit strong performance in processing other speech-related representations, including music and audio events <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib67" title="">67</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite>. 
Their realistic conversational interactivity <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> and low-latency dialogue experiences <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> further distinguish them from traditional spoken dialogue models.</p> </div> <div class="ltx_para" id="S1.p2"> <p class="ltx_p" id="S1.p2.1">The history of spoken dialogue models can be traced back to early systems like dGSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib157" title="">157</a>]</cite> and AudioGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib84" title="">84</a>]</cite>, leading up to more recent advancements such as GPT-4o and Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>. During this period, many notable spoken dialogue models have emerged. As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">1</span></a>, we have organized these models in chronological order. 
Broadly, they can be categorized into two types: cascaded spoken dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>]</cite> and end-to-end <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib149" title="">149</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> spoken dialogue models. Given that most current spoken dialogue models rely on alignment with the text modality, the distinction between cascaded and end-to-end models is crucial. As illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S1.F2" title="Figure 2 ‣ 1 Introduction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2</span></a>, we classify all spoken dialogue models based on whether <span class="ltx_text ltx_font_bold" id="S1.p2.1.1">the core language model can directly understand and generate speech representations</span>, dividing them into cascaded and end-to-end categories. Traditional cascaded spoken dialogue systems such as AudioGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib84" title="">84</a>]</cite> are structured around text as the central intermediary, typically comprising three cascaded modules. First, the input audio is transcribed into text by an automatic speech recognition (ASR) module <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>. 
The transcribed text is then fed into a large language model (LLM) such as ChatGPT to generate a textual response. Finally, this textual response is converted back into audio through a text-to-speech (TTS) module <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib109" title="">109</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib176" title="">176</a>]</cite>. While this cascaded architecture leverages the strong in-context capabilities of large language models, it introduces several challenges, including high latency, limited interactivity, and the inability to process non-textual information. To address these issues, recent research has taken two primary directions. Some approaches <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite> focus on optimizing the understanding and generation components within the cascaded system to mitigate the aforementioned limitations. Other approaches <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> seek to solve these problems directly by adopting end-to-end architectures for spoken dialogue systems. Although end-to-end spoken dialogue models exhibit various differences in terms of representations and model architectures, they share a common feature: they do not rely on text as the central intermediary. Instead, these models aim to directly comprehend and generate speech representations. 
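The cascaded pipeline described above can be made concrete with a minimal sketch. The three functions below are illustrative placeholders, not any real system's API; a production system would call an ASR model, an LLM, and a TTS model, respectively:

```python
# A minimal, hypothetical sketch of the cascaded architecture: three
# sequential modules with text as the central intermediary.

def asr(audio: bytes) -> str:
    """Speech -> text. Placeholder transcription."""
    return "what is the capital of France"

def llm(text: str) -> str:
    """Text -> text. Placeholder for a large language model."""
    return "The capital of France is Paris."

def tts(text: str) -> bytes:
    """Text -> speech. Placeholder synthesis (text bytes as fake audio)."""
    return text.encode("utf-8")

def cascaded_dialogue(user_audio: bytes) -> bytes:
    # Each stage must finish before the next begins (adding latency), and
    # all acoustic information in the input (timbre, emotion, style) is
    # discarded at the ASR step, since only the transcript survives.
    transcript = asr(user_audio)
    response_text = llm(transcript)
    return tts(response_text)
```

An end-to-end model, by contrast, replaces all three stages with a single model that consumes and produces speech representations directly, with no intermediate transcript.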
We define such systems as end-to-end spoken dialogue models.</p> </div> <figure class="ltx_figure" id="S1.F1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="310" id="S1.F1.g1" src="extracted/6000571/images/img1-paper-list.png" width="568"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S1.F1.2.1.1" style="font-size:90%;">Figure 1</span>: </span><span class="ltx_text" id="S1.F1.3.2" style="font-size:90%;">A timeline of existing spoken dialogue models in recent years. The timeline was established mainly according to the release date (e.g., the submission date to arXiv) of the technical paper for each model. It is worth noting that certain works, such as Westlake-Omni, MooER-Omni, Hertz-dev, SpeechGPT2, and Fish-Agent, do not have corresponding published papers. Therefore, we have not included them in the figure. We mark the publicly available model checkpoints in yellow.</span></figcaption> </figure> <div class="ltx_para" id="S1.p3"> <p class="ltx_p" id="S1.p3.1">When constructing spoken dialogue systems, we identify four core technologies closely related to spoken dialogue models, based on the different levels of intelligence involved. The first is the design of speech representations (i.e., tokenizers and detokenizers). The second concerns the paradigm for training, inference, and generation, specifically how to align the speech modality with the text modality while preserving or enhancing the intelligence of existing text-based dialogue models. This part also involves selecting different model architectures, generation strategies, and multi-stage training approaches. The third challenge involves the design of interactive, duplex, and streaming capabilities for spoken dialogue systems. 
Lastly, the fourth challenge relates to data—specifically, how to construct training datasets for spoken dialogue systems and evaluate their performance.</p> </div> <div class="ltx_para" id="S1.p4"> <p class="ltx_p" id="S1.p4.1">Given these considerations, in the following sections of this paper, we address these four key technologies in the order outlined above. In Section 2, we provide an overview of spoken dialogue systems, including typical spoken dialogue scenarios (i.e., how to define a spoken dialogue model) and recent developments in the cascaded and end-to-end spoken dialogue models. Section 3 focuses on the speech representations used in spoken dialogue systems. In Section 4, we systematically discuss the training paradigms, with particular emphasis on how to align the speech modality with the text modality, as well as multi-stage training strategies, model architectures, and generation strategies. Section 5 highlights the unique characteristics of spoken dialogue systems, particularly their duplex, streaming nature, which distinguishes them from text-based dialogue systems. In Section 6, we examine the construction of training datasets and the evaluation methodologies specific to spoken dialogue models. At the end of each section, we include a summary and discussion to reflect on the key insights. Finally, in Section 7, we conclude the survey by summarizing the major findings and discussing open issues for future research. 
Given the complexity of the technical points, we provide an overview of the structure of this survey in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F3" title="Figure 3 ‣ 2.1.2 Speech Intelligence ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">3</span></a>.</p> </div> <figure class="ltx_figure" id="S1.F2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="537" id="S1.F2.g1" src="extracted/6000571/images/img2-method.png" width="598"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S1.F2.3.1.1" style="font-size:90%;">Figure 2</span>: </span><span class="ltx_text" id="S1.F2.4.2" style="font-size:90%;">A general overview of current spoken dialogue systems. We categorize these systems into two paradigms, cascaded spoken dialogue models and end-to-end spoken dialogue models, based on whether the core language model can <span class="ltx_text ltx_font_bold" id="S1.F2.4.2.1">directly</span> understand and generate speech representations. Additionally, we provide a visualization of the input and output methods used in different spoken dialogue systems.</span></figcaption> </figure> </section> <section class="ltx_section" id="S2"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">2 </span>Overall</h2> <div class="ltx_para" id="S2.p1"> <p class="ltx_p" id="S2.p1.1">In this section, we provide an overview of spoken dialogue models. We begin by defining what constitutes an intelligent spoken dialogue model by examining various dialogue scenarios. 
We then provide a comprehensive overview of spoken dialogue models, distinguishing between cascaded spoken dialogue models and end-to-end spoken dialogue models.</p> </div> <section class="ltx_subsection" id="S2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.1 </span>Functions of Spoken Dialogue Systems</h3> <div class="ltx_para" id="S2.SS1.p1"> <p class="ltx_p" id="S2.SS1.p1.1">Based on the demos and inference interfaces of representative models such as GPT-4o, Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, Qwen2-Audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>]</cite>, and VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite>, we categorize the usage scenarios of modern intelligent spoken dialogue models into the following nine representative categories: 1) Text Intelligence, 2) Speech Intelligence, 3) Audio and Music Generation, 4) Audio and Music Understanding, 5) Multilingual Capability, 6) Context Learning, 7) Interaction Capability, 8) Streaming Latency, and 9) Multimodal Capability. For the nine distinct use cases in spoken dialogue models, we provide corresponding examples for each scenario in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a>. It is clear from these usage scenarios that a spoken dialogue model is not simply an extension of a text-based dialogue model to the speech modality (i.e., where the speech modality serves merely as an interface for converting speech into text). 
Rather, an intelligent spoken dialogue system must be capable of comprehending and generating acoustic information embedded in speech (such as timbre, style, and emotion) and of understanding and producing a wider range of audio representations, including information related to audio events and music. Additionally, unlike non-streaming text-based systems, spoken dialogue models need to support real-time, interactive streaming capabilities. These usage scenarios not only highlight the intelligence inherent in spoken dialogue systems but also present significant challenges for building end-to-end spoken dialogue models. Below, we provide a detailed examination of each of the nine usage scenarios.</p> </div> <section class="ltx_subsubsection" id="S2.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.1 </span>Text Intelligence</h4> <div class="ltx_para" id="S2.SS1.SSS1.p1"> <p class="ltx_p" id="S2.SS1.SSS1.p1.1">As illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> (a), a spoken dialogue system must retain the fundamental capabilities of the original text-based dialogue models, such as ChatGPT. We define this usage scenario as textual intelligence. In this context, the spoken dialogue model can intelligently respond to user requests, generating appropriate responses such as travel itineraries, work plans, and scheduling. However, due to the limitations of voice-based interaction, the textual intelligence of current spoken dialogue systems is focused more on everyday scenarios. 
In certain contexts, such as complex mathematical theorem reasoning, the performance requirements for spoken dialogue models differ from those of text-based dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib200" title="">200</a>]</cite>. These advanced aspects of textual intelligence warrant further exploration in unified multimodal dialogue models.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.2 </span>Speech Intelligence</h4> <div class="ltx_para" id="S2.SS1.SSS2.p1"> <p class="ltx_p" id="S2.SS1.SSS2.p1.1">A distinguishing feature of spoken dialogue models, compared to text-based dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib200" title="">200</a>]</cite>, is their ability to understand and generate acoustic information beyond mere textual content. In the speech modality, not only is the textual content present, but also additional acoustic information, such as timbre (speaker identity) and style (emotion, prosody, etc.). 
As illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> (b), an intelligent spoken dialogue system should be capable of <span class="ltx_text ltx_font_bold" id="S2.SS1.SSS2.p1.1.1">understanding</span> the timbre and style</p> </div> <figure class="ltx_figure" id="S2.F3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_portrait" height="1309" id="S2.F3.g1" src="x1.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F3.2.1.1" style="font-size:90%;">Figure 3</span>: </span><span class="ltx_text" id="S2.F3.3.2" style="font-size:90%;">A general overview of the structure of WavChat.</span></figcaption> </figure> <div class="ltx_para" id="S2.SS1.SSS2.p2"> <p class="ltx_p" id="S2.SS1.SSS2.p2.1">of conversational speech and, ideally, <span class="ltx_text ltx_font_bold" id="S2.SS1.SSS2.p2.1.1">generating</span> responses with specified timbre and style in a <span class="ltx_text ltx_font_bold" id="S2.SS1.SSS2.p2.1.2">zero-shot</span> manner.</p> </div> <div class="ltx_para" id="S2.SS1.SSS2.p3"> <p class="ltx_p" id="S2.SS1.SSS2.p3.1">This speech intelligence capability involves several use cases. First, on the comprehension side, the spoken dialogue system should generate responses based on the speaker’s vocal style. A classic example comes from E-chat <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite>: if a user asks, "My phone won’t turn on, what should I do?" in a cheerful tone, the system might respond, "It looks like you’re excited about getting a new phone. What type of phone are you interested in?" 
Conversely, if the user asks the same question in a sad tone, the system might reply, "It’s unfortunate your phone isn’t working. If you’re familiar with the repair policy, let’s proceed with the next steps." This example shows that the spoken dialogue system may generate responses with different <span class="ltx_text ltx_font_bold" id="S2.SS1.SSS2.p3.1.1">content</span> based on varying acoustic information. Furthermore, the system should comprehend various acoustic cues, such as accents or emotional states, and adjust the <span class="ltx_text ltx_font_bold" id="S2.SS1.SSS2.p3.1.2">acoustic</span> characteristics of its responses accordingly. For instance, if the speaker is an American, the system might reply with a native English accent, whereas if the speaker is a Shanghainese user, the system could respond using the corresponding dialect. Similarly, if the user speaks with a sad tone, the dialogue system should be able to generate a more encouraging and empathetic response.</p> </div> <div class="ltx_para" id="S2.SS1.SSS2.p4"> <p class="ltx_p" id="S2.SS1.SSS2.p4.1">On the generation side, speech intelligence is more prominently reflected in its controllability, such as voice cloning and style control. For example, the system could be instructed to mimic a specific voice or respond in a designated style (e.g., mimicking a grandmother’s soft and gentle voice for a comforting interaction). Additionally, the system could use a voice prompt provided during the conversation to fully clone the timbre from the prompt and generate speech in that same voice. 
In summary, the ability to comprehend and generate acoustic information is one of the key characteristics of an intelligent spoken dialogue model.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.3 </span>Audio and Music Generation</h4> <div class="ltx_para" id="S2.SS1.SSS3.p1"> <p class="ltx_p" id="S2.SS1.SSS3.p1.1">In the spoken dialogue models, beyond basic spoken dialogue capabilities, an intelligent spoken dialogue system may be required to generate music and audio. For example, a user might instruct the system to generate a one-minute piano piece or a ten-second recording of a dog barking. Additionally, users might provide lyrics and a musical melody, asking the spoken dialogue model to create a pop song. The system should thus inherit the generative capabilities of large-scale music <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib2" title="">2</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib40" title="">40</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib116" title="">116</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib141" title="">141</a>]</cite> and audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib83" title="">83</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib134" title="">134</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib136" title="">136</a>]</cite> models on the output side.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.4 </span>Audio and Music Understanding</h4> <div class="ltx_para" id="S2.SS1.SSS4.p1"> <p class="ltx_p" id="S2.SS1.SSS4.p1.1">Complementing 
its music and audio generation capabilities, a spoken dialogue model should also be able to understand music and audio on the input side <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite>. For instance, when given an audio clip, the intelligent system should identify both its content and acoustic characteristics, such as recognizing whether the sound is a bird chirping or a cat meowing, or whether the music is calm or energetic. Moreover, the system could extend its understanding by creating literary works—like poetry or songs—based on the given music or audio.</p> </div> <figure class="ltx_figure" id="S2.F4"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf1.g1" src="x2.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf1.2.1.1" style="font-size:90%;">(a)</span> </span><span class="ltx_text" id="S2.F4.sf1.3.2" style="font-size:90%;">Text Intelligence</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf2.g1" src="x3.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf2.2.1.1" style="font-size:90%;">(b)</span> </span><span class="ltx_text" id="S2.F4.sf2.3.2" style="font-size:90%;">Speech Intelligence</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure 
ltx_figure_panel ltx_align_center" id="S2.F4.sf3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf3.g1" src="x4.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf3.2.1.1" style="font-size:90%;">(c)</span> </span><span class="ltx_text" id="S2.F4.sf3.3.2" style="font-size:90%;">Audio and Music Generation</span></figcaption> </figure> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf4"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf4.g1" src="x5.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf4.2.1.1" style="font-size:90%;">(d)</span> </span><span class="ltx_text" id="S2.F4.sf4.3.2" style="font-size:90%;">Audio and Music Understanding</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf5"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf5.g1" src="x6.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf5.2.1.1" style="font-size:90%;">(e)</span> </span><span class="ltx_text" id="S2.F4.sf5.3.2" style="font-size:90%;">Multilingual Capability</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf6"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf6.g1" src="x7.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" 
id="S2.F4.sf6.2.1.1" style="font-size:90%;">(f)</span> </span><span class="ltx_text" id="S2.F4.sf6.3.2" style="font-size:90%;">Context Learning</span></figcaption> </figure> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf7"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf7.g1" src="x8.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf7.2.1.1" style="font-size:90%;">(g)</span> </span><span class="ltx_text" id="S2.F4.sf7.3.2" style="font-size:90%;">Interaction Capability</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf8"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf8.g1" src="x9.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf8.2.1.1" style="font-size:90%;">(h)</span> </span><span class="ltx_text" id="S2.F4.sf8.3.2" style="font-size:90%;">Streaming Latency</span></figcaption> </figure> </div> <div class="ltx_flex_cell ltx_flex_size_3"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S2.F4.sf9"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="690" id="S2.F4.sf9.g1" src="x10.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.sf9.2.1.1" style="font-size:90%;">(i)</span> </span><span class="ltx_text" id="S2.F4.sf9.3.2" style="font-size:90%;">Multimodal Capability</span></figcaption> </figure> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S2.F4.2.1.1" 
style="font-size:90%;">Figure 4</span>: </span><span class="ltx_text" id="S2.F4.3.2" style="font-size:90%;">An overall demonstration of the functions of spoken dialogue systems. We describe the ideal capabilities of such systems from nine different perspectives: Text Intelligence, Speech Intelligence, Audio and Music Generation, Audio and Music Understanding, Multilingual Capability, Context Learning, Interaction Capability, Streaming Latency, and Multimodal Capability. Each function is illustrated with corresponding dialogue examples.</span></figcaption> </figure> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS5"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.5 </span>Multilingual Capability</h4> <div class="ltx_para" id="S2.SS1.SSS5.p1"> <p class="ltx_p" id="S2.SS1.SSS5.p1.1">Similar to text-based dialogue models, spoken dialogue systems are expected to possess multilingual capabilities. Specifically, these models should be able to perform multilingual content translation, such as translating a Japanese speech segment into French speech, effectively inheriting the capabilities of simultaneous interpretation. In addition to multilingual content translation, the system should also handle multilingual acoustic information. 
This means that the intelligent spoken dialogue model should be able to generate responses in various languages and accents, replying in the corresponding accent of the target language depending on the input speech.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS6"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.6 </span>Context Learning</h4> <div class="ltx_para" id="S2.SS1.SSS6.p1"> <p class="ltx_p" id="S2.SS1.SSS6.p1.1">In spoken dialogue models, the ability to handle long-form and multi-turn conversations is a key benchmark for evaluating performance <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>. This requires that spoken dialogue models not only support long-duration audio inputs but also generate extended audio outputs. Moreover, they must be capable of engaging in multi-turn conversations based on historical context. An important aspect of multi-turn dialogue is the ability to revise previous responses based on new user instructions. 
As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> (f), an intelligent spoken dialogue model should be able to continuously modify its previous replies according to the user’s evolving requests.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS7"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.7 </span>Interaction Capability</h4> <div class="ltx_para" id="S2.SS1.SSS7.p1"> <p class="ltx_p" id="S2.SS1.SSS7.p1.1">A distinguishing feature of spoken dialogue systems compared to the text-based dialogue models is their duplex and interactive nature <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>. In text-based dialogue, interactions typically follow a half-duplex structure, where the response can only be provided after the question has been completed, and the user is unable to interrupt the reply in real-time. However, in the spoken dialogue systems, full-duplex interaction is common. This means that a conversation does not need to be fully completed before a response can be generated. Both the system and the user can interrupt and interact in real time. For example, if the user is unsatisfied with the system’s response, they can immediately interrupt, causing the system to halt its current generation and respond to the new input. Additionally, to emulate more natural conversational settings, the system can also interrupt the user when appropriate, such as when clarifying the user’s intent. Beyond the ability to interrupt, interactive dialogue often includes the use of conversational fillers, such as "okay," "haha," or "oh," which signal acknowledgment or agreement. 
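The full-duplex behavior described in this subsection can be illustrated with a small sketch. This is not any particular system's design: the event names and the `DuplexSession` class are hypothetical, chosen only to show how generation streams chunk by chunk while the system keeps listening, and how a user interruption ("barge-in") halts generation immediately:

```python
# Illustrative full-duplex turn-taking: one hypothetical user event is
# observed per emitted reply chunk; "barge-in" interrupts generation,
# while "backchannel" models a filler acknowledgment from the user.

from dataclasses import dataclass, field

@dataclass
class DuplexSession:
    transcript: list = field(default_factory=list)

    def speak(self, reply_chunks, user_events):
        """Interleave speaking with listening; stop on user interruption."""
        events = iter(user_events)
        for chunk in reply_chunks:
            event = next(events, None)      # listen while speaking
            if event == "barge-in":
                self.transcript.append("<interrupted>")
                return                      # halt the current generation
            if event == "backchannel":      # user filler: "okay", "oh", ...
                self.transcript.append("(user: okay)")
            self.transcript.append(chunk)

session = DuplexSession()
session.speak(["The weather", " today is", " sunny"], [None, "barge-in"])
# Generation stops after the first chunk once the barge-in is heard.
```

The "backchannel" branch stands in for conversational fillers such as "okay" or "oh", which the system must hear without treating them as interruptions.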
Including these within spoken dialogue models enhances the realism and natural flow of conversations. The underlying requirement for interaction capabilities is that the system should be able to listen and speak simultaneously, responding dynamically to the flow of the interaction.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS8"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.8 </span>Streaming Latency</h4> <div class="ltx_para" id="S2.SS1.SSS8.p1"> <p class="ltx_p" id="S2.SS1.SSS8.p1.1">Streaming comprehension and generation are also fundamental functionalities of spoken dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>. In real-world scenarios, a model cannot wait until an entire minute-long audio segment has been processed before generating a response. Instead, the model must adopt a chunk-based mechanism, dynamically processing and generating audio in real time, one chunk at a time. Additionally, the streaming requirement means that the entire system must operate in a causal manner, understanding and generating audio based solely on past information, without relying on future information. Streaming capability is closely tied to the need for low latency. 
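The chunk-based, causal mechanism described above can be sketched in a few lines of Python. The chunk size, the emission rule, and all names below are illustrative assumptions rather than any particular system's design; the point is that each step may consult only past chunks, and the position of the first emitted chunk determines the user-perceived latency.

```python
# Minimal sketch of chunk-based, causal streaming (all names hypothetical).
# The model sees only a running "state" of past chunks and may emit an
# output chunk as soon as enough context has accumulated.

CHUNK_MS = 80  # process audio in short chunks rather than whole utterances

def process_chunk(state, chunk):
    """Causal step: update state from past context only, maybe emit output."""
    state = state + [chunk]            # no access to future chunks
    if len(state) >= 3:                # toy rule: emit once 3 chunks of context exist
        return state, f"out({chunk})"
    return state, None                 # still accumulating context

def stream(chunks):
    state, outputs, first_token_chunk = [], [], None
    for i, chunk in enumerate(chunks):
        state, out = process_chunk(state, chunk)
        if out is not None:
            if first_token_chunk is None:
                first_token_chunk = i  # proxy for first-token latency
            outputs.append(out)
    return outputs, first_token_chunk
```

Lowering the amount of context required before the first emission directly lowers the first-token latency, which is exactly the trade-off this subsection discusses.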
In practical conversational experiences, the latency of the first token generated by the spoken dialogue model (i.e., the wait time for the user) and the average latency of the generation process are critical factors that influence the overall responsiveness and usability of the spoken dialogue system.</p> </div> </section> <section class="ltx_subsubsection" id="S2.SS1.SSS9"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">2.1.9 </span>Multimodal Capability</h4> <div class="ltx_para" id="S2.SS1.SSS9.p1"> <p class="ltx_p" id="S2.SS1.SSS9.p1.1">Multimodal dialogue capability <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> represents an advanced feature of spoken dialogue models. In existing systems, this typically refers to the ability to process inputs from multiple modalities, such as video, images, and text, while generating intelligent speech responses. A spoken dialogue model equipped with this capability achieves the ability to “hear, see, and speak” simultaneously. Multimodal inputs significantly enhance the potential of these systems; for instance, users can employ various gestures to improve the quality of the model’s generated responses, and the system can develop a deeper understanding of the physical world. 
Beyond multimodal inputs, the future of dialogue systems lies in large multimodal models that unify the comprehension and generation capabilities across all modalities, with spoken dialogue serving as the foundational modality.</p> </div> </section> </section> <section class="ltx_subsection" id="S2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.2 </span>Cascaded Spoken Dialogue Systems</h3> <div class="ltx_para" id="S2.SS2.p1"> <p class="ltx_p" id="S2.SS2.p1.1">The earliest prototype of cascaded spoken dialogue systems can be traced back to AudioGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib84" title="">84</a>]</cite>. To achieve speech-to-speech dialogue functionality, the system first employed an Automatic Speech Recognition (ASR) model to convert speech into text, followed by ChatGPT for text-based dialogue, and finally, a Text-to-Speech (TTS) model to convert the generated text back into speech. In this primitive version, speech was used solely as an input-output interface, retaining only the most basic textual intelligence. 
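The AudioGPT-style cascade described above (ASR, then a text LLM, then TTS) reduces to simple function composition. The sketch below is a toy illustration with placeholder stage functions, not real model calls:

```python
# Toy sketch of the cascaded pipeline: ASR -> LLM -> TTS.
# Each stage function is a stand-in for a real model (e.g. an ASR system,
# ChatGPT, a neural TTS); the bodies here are placeholders.

def asr(speech):              # stand-in for an ASR model
    return speech.replace("speech:", "")

def llm(text):                # stand-in for a text dialogue model
    return f"reply to '{text}'"

def tts(text):                # stand-in for a TTS model
    return f"audio[{text}]"

def cascaded_dialogue(speech):
    """Speech in -> speech out; text is the only intermediate representation."""
    return tts(llm(asr(speech)))
```

Because text is the only representation passed between stages, paralinguistic cues in the input speech (emotion, prosody, speaker identity) are lost at the ASR boundary, which motivates the extensions discussed next.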
For example, in Hugging Face’s open-source Speech-To-Speech framework<span class="ltx_note ltx_role_footnote" id="footnote5"><sup class="ltx_note_mark">5</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">5</sup><span class="ltx_tag ltx_tag_note">5</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/huggingface/speech-to-speech" title="">https://github.com/huggingface/speech-to-speech</a></span></span></span>, an additional Voice Activity Detection (VAD) module<span class="ltx_note ltx_role_footnote" id="footnote6"><sup class="ltx_note_mark">6</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">6</sup><span class="ltx_tag ltx_tag_note">6</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/snakers/silero-vad" title="">https://github.com/snakers/silero-vad</a></span></span></span> was further layered onto the traditional cascaded modules to distinguish between speech and silent segments, as well as between different speakers.</p> </div> <div class="ltx_para" id="S2.SS2.p2"> <p class="ltx_p" id="S2.SS2.p2.1">After basic textual intelligence had been established in cascaded spoken dialogue models, researchers began incorporating paralinguistic features, such as emotion and style, to enhance their speech intelligence. For instance, ParalinGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>]</cite> and E-chat <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite> integrate conversational context, speech embeddings, and paralinguistic attributes into an autoregressive model via a sliding window, allowing the model to generate more accurate text responses by combining historical text and emotional representations. 
Similarly, Spoken-LLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>]</cite> introduces an Emotion2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib143" title="">143</a>]</cite> module to provide style vectors to the Llama2-Chat model. Through LoRA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib79" title="">79</a>]</cite> fine-tuning, Llama2-Chat is trained not only to generate content-based text responses but also to produce text responses with specific stylistic attributes (e.g., <cheerful, fast, normal>), which can guide downstream TTS systems in generating expressive speech.</p> </div> <div class="ltx_para" id="S2.SS2.p3"> <p class="ltx_p" id="S2.SS2.p3.1">In addition to understanding acoustic information within cascaded spoken dialogue models, there have been efforts to directly input speech representations while retaining text as the output modality <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib41" title="">41</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib111" title="">111</a>]</cite>. This forces cascaded spoken dialogue systems to process input speech directly. A common approach involves integrating frozen speech encoders (such as Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>) with trainable encoder adapters, allowing the speech input to be interpreted as a specialized form of text by the large language model. 
By extending the vocabulary of the text-based dialogue model, the large language model can process speech as if it were a unique form of text, enabling the generation of appropriate text responses in the cascaded spoken dialogue models.</p> </div> <div class="ltx_para" id="S2.SS2.p4"> <p class="ltx_p" id="S2.SS2.p4.1">Notably, these cascaded spoken dialogue models have further advanced beyond the comprehension of human speech alone and can now understand a variety of audio modalities, including music and audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib67" title="">67</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite>. For example, SALMONN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite> models both speech and audio information by freezing the Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite> and BEATs <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib28" title="">28</a>]</cite> encoder and bridging them to a large language model via a Window-Level Q-Former <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib121" title="">121</a>]</cite>. As a result, these cascaded spoken dialogue systems are capable of further performing a wide range of tasks on the comprehension side. 
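A minimal sketch of the frozen-encoder-plus-trainable-adapter pattern discussed here: a fixed feature extractor stands in for a frozen encoder such as Whisper, and a small linear adapter projects its features into the LLM's embedding space. Dimensions, weights, and function names are all toy assumptions.

```python
# Sketch of a frozen speech encoder bridged to an LLM via a trainable adapter.
# Only the adapter weights W would be updated during training; the encoder
# is treated as a fixed feature extractor. All values here are toy.

def frozen_encoder(audio):
    """Frozen stage: map raw samples to 2-dim features per frame."""
    return [[float(x), float(x) * 2] for x in audio]

def adapter(features, W):
    """Trainable linear projection of each frame into the LLM embedding dim."""
    return [[sum(w * f for w, f in zip(row, frame)) for row in W]
            for frame in features]

def to_llm_inputs(audio, W):
    """Speech becomes a sequence of 'soft tokens' in the LLM's embedding space."""
    return adapter(frozen_encoder(audio), W)
```

The output sequence is then consumed by the language model exactly as if it were a run of embedded text tokens, which is what lets the LLM treat speech as "a specialized form of text".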
For instance, models like Qwen-audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>]</cite> can handle multiple tasks such as Automatic Speech Recognition (ASR), Speech-to-Text Translation (S2TT), Automatic Audio Captioning (AAC), Acoustic Scene Classification (ASC), Speech Emotion Recognition (SER), Audio Question Answering (AQA), Vocal Sound Classification (VSC), and Music Note Analysis (MNA). Consequently, these cascaded models are often regarded as part of multitask speech-text large language models.</p> </div> <div class="ltx_para" id="S2.SS2.p5"> <p class="ltx_p" id="S2.SS2.p5.1">It is worth noting that the aforementioned cascaded spoken dialogue models generate text only and then directly feed it into a pre-trained TTS module. However, more recent cascaded spoken dialogue models, such as Llama3.1, have begun integrating trainable TTS modules as part of the decoder within the large language model (LLM). While these models have made progress in incorporating low-latency streaming functionalities, they are still fundamentally based on generating text content first, which is then converted into speech. They do not directly generate speech-related representations within the LLM itself. Therefore, we classify these models as cascaded spoken dialogue systems.</p> </div> <div class="ltx_para" id="S2.SS2.p6"> <p class="ltx_p" id="S2.SS2.p6.1">In addition, some recent efforts have focused on enhancing models like Qwen2-Audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>]</cite> by incorporating multimodal comprehension capabilities, thereby enabling a degree of multimodal dialogue functionality. 
For instance, models such as VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> and Baichuan-Omni<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib122" title="">122</a>]</cite> integrate various encoders or tokenizers for images, audio, and video into the LLM, allowing the model to understand multimodal inputs and generate corresponding text responses.</p> </div> <div class="ltx_para" id="S2.SS2.p7"> <p class="ltx_p" id="S2.SS2.p7.1">The above developments concern the comprehension side of cascaded spoken dialogue systems. On the generation side, two main types of speech synthesis work are relevant to cascaded spoken dialogue systems. Firstly, there has been a recent surge of advanced speech synthesis systems that can produce highly expressive and natural audio based on textual input, such as VALL-E (X) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib209" title="">209</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib250" title="">250</a>]</cite>, MegaTTS1/2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib96" title="">96</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib97" title="">97</a>]</cite>, CosyVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite>, ChatTTS<span class="ltx_note ltx_role_footnote" id="footnote7"><sup class="ltx_note_mark">7</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">7</sup><span class="ltx_tag ltx_tag_note">7</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/2noise/ChatTTS" title="">https://github.com/2noise/ChatTTS</a></span></span></span>, FishSpeech<span 
class="ltx_note ltx_role_footnote" id="footnote8"><sup class="ltx_note_mark">8</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">8</sup><span class="ltx_tag ltx_tag_note">8</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/fishaudio/fish-speech" title="">https://github.com/fishaudio/fish-speech</a></span></span></span>, ParlerTTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib140" title="">140</a>]</cite>, MaskGCT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib216" title="">216</a>]</cite> and F5-TTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib30" title="">30</a>]</cite>. In addition, there has been significant progress in the field of text-style controllable TTS, with systems like TextrolSpeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib92" title="">92</a>]</cite>, PromptTTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib71" title="">71</a>]</cite>, PromptTTS2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib118" title="">118</a>]</cite>, InstructTTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib231" title="">231</a>]</cite>, and ControlSpeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib93" title="">93</a>]</cite>. 
These TTS systems can generate highly natural audio based on both the content and style of the text output produced by the cascaded spoken dialogue models.</p> </div> </section> <section class="ltx_subsection" id="S2.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.3 </span>End-to-End Spoken Dialogue Systems</h3> <div class="ltx_para" id="S2.SS3.p1"> <p class="ltx_p" id="S2.SS3.p1.1">Ideally, end-to-end spoken dialogue models should enable <span class="ltx_text ltx_font_bold" id="S2.SS3.p1.1.1">only</span> speech input and output during both training and inference, thereby achieving multiple intelligent dialogue functions. However, speech is a lower information-density modality than text (it carries rich acoustic detail alongside semantic content), and the volume of available text data far exceeds that of speech data, so many end-to-end spoken dialogue models choose to align the speech modality with the text modality to leverage pre-trained large language models (LLMs). Consequently, as shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S1.F2" title="Figure 2 ‣ 1 Introduction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2</span></a>, as long as the large language model can directly understand and generate speech representations, we classify the system as an end-to-end spoken dialogue model. 
In contrast, if the large language model can only generate text, we categorize the system as a cascaded spoken dialogue system.</p> </div> <div class="ltx_para" id="S2.SS3.p2"> <p class="ltx_p" id="S2.SS3.p2.1">The earliest end-to-end spoken dialogue system can be traced back to dGSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib157" title="">157</a>]</cite>, which was trained on thousands of hours of dual-track data <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib37" title="">37</a>]</cite> using self-attention and cross-attention mechanisms to simulate duplex interactions. Although dGSLM lacks integration with LLMs and even basic textual intelligence, it is notable as the first fully end-to-end spoken dialogue system that does not rely on text while maintaining excellent conversational interactivity.</p> </div> <div class="ltx_para" id="S2.SS3.p3"> <p class="ltx_p" id="S2.SS3.p3.1">Following the release of dGSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib157" title="">157</a>]</cite>, progress in the domain of end-to-end spoken dialogue systems stagnated for a few months. However, with the advent of ChatGPT, this field experienced rapid development. A representative approach is SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite>, which employs autoregressive language modeling over a chained sequence of input speech tokens, the corresponding input text tokens, the response text tokens, and finally the response speech tokens. 
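The chained token sequence used by SpeechGPT-style models can be illustrated as follows; the special markers here are hypothetical, not SpeechGPT's actual vocabulary.

```python
# Hedged sketch of a chain-of-modality sequence in the SpeechGPT style:
# input speech units, their transcript, the text response, and finally the
# response speech units, all in one autoregressive stream.
# <sosp>/<eosp>/<sot>/<eot> are illustrative delimiter tokens.

def chain_of_modality(speech_in, text_in, text_out, speech_out):
    seq = ["<sosp>"] + speech_in + ["<eosp>"]    # user speech (discrete units)
    seq += ["<sot>"] + text_in + ["<eot>"]       # its transcript
    seq += ["<sot>"] + text_out + ["<eot>"]      # text response (textual intelligence)
    seq += ["<sosp>"] + speech_out + ["<eosp>"]  # response speech units
    return seq
```

Because the stream is generated left to right, the response speech tokens at the tail cannot begin until the full text response has been produced, which is the source of the latency problem this section discusses.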
This method enables the direct generation of speech tokens using textual intelligence, inspiring subsequent end-to-end spoken dialogue systems such as Spectron <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib146" title="">146</a>]</cite>, SpeechGPT-Gen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>]</cite>, GLM-4-Voice<span class="ltx_note ltx_role_footnote" id="footnote9"><sup class="ltx_note_mark">9</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">9</sup><span class="ltx_tag ltx_tag_note">9</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/THUDM/GLM-4-Voice" title="">https://github.com/THUDM/GLM-4-Voice</a></span></span></span>, and EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite>. These systems continue to use an autoregressive framework, generating the text tokens followed by the speech tokens. Although this approach allows LLMs to generate speech tokens directly, it introduces latency issues since speech token generation cannot begin until the generation of text tokens is complete. This leads to problems in multi-turn dialogue and overall system delay.</p> </div> <div class="ltx_para" id="S2.SS3.p4"> <p class="ltx_p" id="S2.SS3.p4.1">Beyond the design of SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite>, another intuitive approach is to directly use the hidden states before the LLM’s softmax layer to predict both text tokens and speech tokens through different projection layers. This allows the network to share weights up to the projection layer, thereby aligning the speech and text modalities. 
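This shared-trunk, dual projection-head design can be sketched with toy weights; all values and shapes below are illustrative, not taken from any real model.

```python
# Sketch of dual projection heads over one shared hidden state: the same
# trunk output h feeds a text-vocabulary head and a speech-token head.
# Weights are toy values.

def matvec(W, h):
    """Apply a linear layer (list-of-rows weight matrix) to a vector."""
    return [sum(w_ij * h_j for w_ij, h_j in zip(row, h)) for row in W]

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

def dual_head_step(h, W_text, W_speech):
    """Predict one text token and one speech token from the same hidden state."""
    return argmax(matvec(W_text, h)), argmax(matvec(W_speech, h))
```

Everything up to `h` is shared between the two modalities; only the two projection matrices differ, which is what aligns the speech and text streams.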
The PSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib154" title="">154</a>]</cite> model is a typical example of this design. Another method, proposed by Meta, is the interleaving approach, as seen in Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite>, where speech and text sequences are concatenated into a single token stream and trained using a word-level interleaving method with a small, automatically curated speech-text parallel corpus. However, this approach requires precise alignment between speech and text.</p> </div> <div class="ltx_para" id="S2.SS3.p5"> <p class="ltx_p" id="S2.SS3.p5.1">Recently, several new end-to-end spoken dialogue systems have emerged. For instance, Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, which is based on a global-local transformer, can simultaneously generate text and speech acoustic tokens from a multi-layer quantizer. Starting from a text-based language model backbone, Moshi generates speech tokens from the residual quantizer of a neural audio codec while modeling both the user’s speech and the system’s responses in parallel streams. This design eliminates the need for explicit speaker turns and allows for the modeling of arbitrary conversational dynamics. Moreover, Moshi extends previous hierarchical semantic-to-acoustic token generation by first predicting time-aligned text tokens as a prefix to audio tokens. 
Similarly, Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> uses a MusicGen-based <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib40" title="">40</a>]</cite> method to simultaneously generate text and speech codec tokens. It introduces two strategies: padding text tokens to enable autoregressive generation without strict temporal alignment, and batch-parallel inference to boost performance. Mini-Omni2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> further enhances this by incorporating multimodal understanding and duplex functionality. At the same time, LLaMA-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>, Freeze-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib212" title="">212</a>]</cite> and IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> each design an LLM for real-time voice interaction. Their commonality is that, at the generation stage, the hidden states of the LLM are further fed into a corresponding decoder model. LLaMA-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> integrates a pretrained speech encoder, a speech adapter, an LLM, and a streaming speech decoder. It eliminates the need for intermediate speech transcription and can simultaneously generate text and speech responses directly from speech instructions with low latency. 
Freeze-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib212" title="">212</a>]</cite> designs a three-stage training strategy for modeling both speech input and speech output, enabling it to acquire speech-to-speech dialogue ability using only text-speech paired data. The core idea of Freeze-Omni lies in transferring the functionalities of spoken dialogue models to the encoder (ASR) and decoder (TTS), rather than assigning these tasks to the large language model. IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> facilitates the transfer of textual capabilities from pre-trained LLMs to the speech modality by reducing the modality gap between text and speech. By using a GroupFormer to generate HuBERT tokens from the LLM’s hidden states, IntrinsicVoice effectively reduces speech sequences to lengths comparable to text sequences, generating high-quality audio while significantly speeding up inference and mitigating long-text modeling issues. Additionally, some end-to-end spoken dialogue models align speech and text through multi-stage training, eliminating the need to generate text during inference. For example, Omni-Flatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> employs modality alignment, half-duplex dialogue learning, and full-duplex dialogue learning, along with a flattening-style standardization of text and speech tokens, to achieve duplex, text-free speech dialogue during inference. 
Similar approaches include SyncLLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite>.</p> </div> <div class="ltx_para" id="S2.SS3.p6"> <p class="ltx_p" id="S2.SS3.p6.1">In this section, we have provided a general overview of current end-to-end spoken dialogue systems. However, these systems differ significantly in their speech representations, training paradigms, model architectures, and generation strategies. In Sections <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3" title="3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">3</span></a> and <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4" title="4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a>, we will present a detailed classification, followed by our discussions at the end of each section.</p> </div> </section> </section> <section class="ltx_section" id="S3"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">3 </span>Representations of Spoken Dialogue Models</h2> <div class="ltx_para" id="S3.p1"> <p class="ltx_p" id="S3.p1.1">Representations play a critical role in spoken dialogue systems, as they determine how the system comprehends, processes, and generates speech signals. Additionally, they serve as a bridge between speech and other modalities, thereby directly influencing the system’s performance, functionality, and range of applications. Compared to text and visual representations, speech representations possess a unique complexity. Text representations primarily rely on a well-defined symbolic system, conveying meaning through structured elements like vocabulary and syntax. 
Visual representations, on the other hand, focus on capturing spatial relationships and visual features in images. In contrast, speech signals contain both dynamic acoustic features (such as timbre, prosody and emotion) and rich semantic content, requiring representations that not only capture temporal variations but also preserve an understanding of the underlying meaning.</p> </div> <div class="ltx_para" id="S3.p2"> <p class="ltx_p" id="S3.p2.1">The unique nature of speech has led to the development of two types of representation models. The representations obtained by these two modeling approaches are often classified as semantic tokens and acoustic tokens. <span class="ltx_text ltx_font_bold" id="S3.p2.1.1">One category (semantic) is prediction-based modeling</span>, these models are trained for representation learning by predicting future frames in an autoregressive manner <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib35" title="">35</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib187" title="">187</a>]</cite> or by using surrounding frames to predict masked frames <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib31" title="">31</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib133" title="">133</a>]</cite>. This approach tends to prioritize capturing linguistic information within speech, making it particularly useful for recognition and understanding tasks. 
<span class="ltx_text ltx_font_bold" id="S3.p2.1.2">The other category (acoustic) focuses on speech compression and reconstruction</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib90" title="">90</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib113" title="">113</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib238" title="">238</a>]</cite>. These models quantize speech features (downsampled from raw waveforms by an encoder) into a series of discrete tokens, and then use a decoder to upsample these discrete tokens back into speech, computing a reconstruction loss against the original signal. This approach yields discrete acoustic tokens with impressive compression rates and high-fidelity acoustic information, making them more suitable for tasks such as speech synthesis and emotion analysis.</p> </div> <div class="ltx_para" id="S3.p3"> <p class="ltx_p" id="S3.p3.1">In spoken dialogue systems, as illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S1.F2" title="Figure 2 ‣ 1 Introduction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2</span></a>, different spoken dialogue models employ various approaches for representation selection. In the following part, we will enumerate the commonly used speech representations in spoken dialogue models from both the input and output perspectives. 
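The compress-quantize-reconstruct loop behind these acoustic tokens can be sketched end to end. Everything below (the scalar codebook, the 2x pair-averaging "encoder", the repetition "decoder") is a deliberately tiny toy; real codecs use neural encoders and decoders with residual vector quantization, but the training signal, a reconstruction loss against the original waveform, has the same shape.

```python
# Toy sketch of the acoustic-token pipeline: downsample, quantize each frame
# to its nearest codebook entry (the discrete acoustic token), upsample,
# and score reconstruction against the original signal.

CODEBOOK = [-0.5, 0.0, 0.5, 1.0]

def encode(wave):
    """Crude 2x downsampling by averaging adjacent samples."""
    return [(wave[i] + wave[i + 1]) / 2 for i in range(0, len(wave), 2)]

def quantize(frames):
    """Token = index of the nearest codebook entry for each frame."""
    return [min(range(len(CODEBOOK)), key=lambda k: abs(CODEBOOK[k] - f))
            for f in frames]

def decode(tokens):
    """Upsample by repeating each dequantized frame."""
    return [CODEBOOK[t] for t in tokens for _ in range(2)]

def reconstruction_loss(wave):
    """Mean squared error between the original and the round-tripped signal."""
    recon = decode(quantize(encode(wave)))
    return sum((a - b) ** 2 for a, b in zip(wave, recon)) / len(wave)
```

The integer list returned by `quantize` is exactly what a spoken dialogue model would consume or predict as its discrete acoustic tokens.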
At the end of this section, we will thoroughly discuss the advantages and limitations of these representations, as well as the future trends in the development of representations used in spoken dialogue models.</p> </div> <section class="ltx_subsection" id="S3.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.1 </span>Speech Representations at the Inputs</h3> <div class="ltx_para" id="S3.SS1.p1"> <p class="ltx_p" id="S3.SS1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.SS1.p1.1.1">Semantic.</span> To enhance language models’ ability to understand speech representations and align multimodal data at input, using pretrained models such as Wav2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib184" title="">184</a>]</cite>, HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite>, Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>, and WavLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib27" title="">27</a>]</cite> to extract high-level semantic features from speech has become a core strategy for many spoken dialogue systems.</p> </div> <div class="ltx_para" id="S3.SS1.p2"> <p class="ltx_p" id="S3.SS1.p2.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p2.1.m1.1"><semantics id="S3.SS1.p2.1.m1.1a"><mo id="S3.SS1.p2.1.m1.1.1" xref="S3.SS1.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p2.1.m1.1b"><ci id="S3.SS1.p2.1.m1.1.1.cmml" xref="S3.SS1.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p2.1.m1.1d">∙</annotation></semantics></math> <em 
class="ltx_emph ltx_font_italic" id="S3.SS1.p2.1.1">Wav2Vec.</em> Wav2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib184" title="">184</a>]</cite> is a foundational work in the field of speech representation learning, pioneering the extraction of self-supervised speech representations from unlabeled speech data. This approach has driven technological advancements in tasks such as speech recognition, speaker identification, and other speech processing applications. Wav2Vec employs a multi-layer, one-dimensional convolutional neural network directly on raw speech waveforms to progressively extract temporal speech features. Training is accomplished through contrastive learning: the model selects a "correct" target (from the current speech frame) alongside several "incorrect" targets (negative samples). By learning to distinguish positive samples from negatives, the model effectively learns to represent speech features in latent space. As an improved version of Wav2Vec, Wav2Vec 2.0 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib10" title="">10</a>]</cite> introduces the Transformer architecture and masked modeling. Wav2Vec 2.0 quantizes the latent speech representations extracted by the CNN and then uses a Transformer to model semantic information, similar to BERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib45" title="">45</a>]</cite>. It also employs a contrastive learning objective, requiring the model to distinguish the correct quantized representations from multiple candidate representations. 
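The contrastive objective just described can be illustrated with a minimal InfoNCE-style sketch; the context vector, positive target, and negative samples below are toy 3-dimensional values, not real Wav2Vec 2.0 features:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(context, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: the context vector should be more similar to the
    true (positive) quantized target than to the distractor (negative) ones."""
    sims = [cosine(context, positive)] + [cosine(context, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Toy example: the positive target is nearly parallel to the context vector.
ctx = [1.0, 0.0, 0.0]
pos = [0.9, 0.1, 0.0]
negs = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
loss_good = contrastive_loss(ctx, pos, negs)
loss_bad = contrastive_loss(ctx, negs[0], [pos, negs[1]])
```

The loss is small when the context already points toward the correct quantized target and large when a distractor is more similar, which is what drives the representation learning.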
ParalinGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>]</cite> aims to incorporate emotional expression in conversational interactions, choosing Wav2Vec 2.0 for its proven capability to encode rich prosodic information, beneficial for speech emotion recognition <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib123" title="">123</a>]</cite>. Specifically, ParalinGPT uses Wav2Vec 2.0’s intermediate layer (the 12th layer) for frame-by-frame feature extraction, as this layer has shown optimal results in linear probing tasks for emotion analysis. Additionally, ParalinGPT applies mean pooling and a linear feature projector to extract utterance embeddings.</p> </div> <div class="ltx_para" id="S3.SS1.p3"> <p class="ltx_p" id="S3.SS1.p3.2"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p3.1.m1.1"><semantics id="S3.SS1.p3.1.m1.1a"><mo id="S3.SS1.p3.1.m1.1.1" xref="S3.SS1.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p3.1.m1.1b"><ci id="S3.SS1.p3.1.m1.1.1.cmml" xref="S3.SS1.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p3.2.1">XLS-R.</em> XLS-R <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib9" title="">9</a>]</cite> is a multilingual self-supervised speech representation model based on the Wav2Vec 2.0 architecture. It extends and optimizes Wav2Vec 2.0 to support a broader range of languages, particularly low-resource languages. 
During cross-lingual training, XLS-R employs multilingual data augmentation and denoising techniques, enhancing the model’s adaptability when processing speech in various languages. USDM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib106" title="">106</a>]</cite> uses XLS-R to obtain continuous intermediate representations at 50Hz, followed by a quantizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib14" title="">14</a>]</cite> with <math alttext="K" class="ltx_Math" display="inline" id="S3.SS1.p3.2.m2.1"><semantics id="S3.SS1.p3.2.m2.1a"><mi id="S3.SS1.p3.2.m2.1.1" xref="S3.SS1.p3.2.m2.1.1.cmml">K</mi><annotation-xml encoding="MathML-Content" id="S3.SS1.p3.2.m2.1b"><ci id="S3.SS1.p3.2.m2.1.1.cmml" xref="S3.SS1.p3.2.m2.1.1">𝐾</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p3.2.m2.1c">K</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p3.2.m2.1d">italic_K</annotation></semantics></math>=10000 to generate speech tokens.</p> </div> <div class="ltx_para" id="S3.SS1.p4"> <p class="ltx_p" id="S3.SS1.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p4.1.m1.1"><semantics id="S3.SS1.p4.1.m1.1a"><mo id="S3.SS1.p4.1.m1.1.1" xref="S3.SS1.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p4.1.m1.1b"><ci id="S3.SS1.p4.1.m1.1.1.cmml" xref="S3.SS1.p4.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p4.1.1">HuBERT.</em> HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> is a commonly used unsupervised learning model that performs K-Means clustering 
on the MFCC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib251" title="">251</a>]</cite> features of speech to assign pseudo-labels to each frame. It uses a convolutional encoder to generate a sequence of features at a 20ms frame rate from 16kHz sampled speech. Finally, it randomly masks spans of consecutive frame features, which serve as input to the Transformer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib201" title="">201</a>]</cite>. HuBERT predicts the content of the masked regions from the surrounding context, enabling it to capture temporal and semantic information within speech and gain a deeper understanding of contextual details. Spoken dialogue systems such as E-Chat <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite>, SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite>, PSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib154" title="">154</a>]</cite>, and IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> widely use HuBERT as their speech encoder. E-Chat extracts the weighted sum of HuBERT’s 24 layers to serve as speech embeddings, and incorporates an additional set of weighted parameters to extract emotion embeddings, thereby enabling emotion-aware capabilities. SpeechGPT applies K-Means clustering to quantize the continuous features extracted from HuBERT, converting them into discrete unit sequences. These discrete units are then integrated into the vocabulary of the large language model, enabling direct alignment between the text and speech modalities.
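This vocabulary-expansion idea can be sketched as follows; the <code>&lt;unit_k&gt;</code> token format and the two-word toy vocabulary are hypothetical placeholders, not SpeechGPT's actual implementation:

```python
def units_to_tokens(unit_ids):
    """Render discrete speech units as text-like tokens (placeholder format)."""
    return [f"<unit_{u}>" for u in unit_ids]

def extend_vocab(vocab, num_units):
    """Append one entry per speech unit to an existing text vocabulary,
    so speech and text share a single token space."""
    vocab = dict(vocab)  # don't mutate the caller's vocabulary
    for u in range(num_units):
        tok = f"<unit_{u}>"
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

text_vocab = {"hello": 0, "world": 1}
joint_vocab = extend_vocab(text_vocab, num_units=1000)
seq = units_to_tokens([17, 17, 42])
```

Once the units live in the same vocabulary as text tokens, a single autoregressive language model can consume and emit mixed text-speech sequences.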
To more effectively integrate the language model with speech streams, PSLM adds an additional embedding layer after extracting features with HuBERT. IntrinsicVoice uses HuBERT as the speech tokenizer, grouping speech tokens to reduce sequence length. An embedding layer then converts these tokens into dense embeddings, which are subsequently mapped into the language model’s embedding space using a trainable speech adapter. Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> extracts semantic features using HuBERT, employing a K-Means model with 500 clusters to define the basic units. It trains a feedforward quantizer with data augmentation techniques <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib64" title="">64</a>]</cite> to produce discrete speech tokens. Align-SLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib129" title="">129</a>]</cite> likewise uses HuBERT, with the number of clusters K set to 500. Notably, when continuous representations are clustered into discrete units, they primarily capture content information, which can be leveraged for modeling and understanding.
This process first extracts 25Hz frame-level continuous representations from the 11th layer of the HuBERT model, assigns each frame to its closest cluster index, and then de-duplicates consecutive identical indices to shorten the sequence.</p> </div> <div class="ltx_para" id="S3.SS1.p5"> <p class="ltx_p" id="S3.SS1.p5.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p5.1.m1.1"><semantics id="S3.SS1.p5.1.m1.1a"><mo id="S3.SS1.p5.1.m1.1.1" xref="S3.SS1.p5.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p5.1.m1.1b"><ci id="S3.SS1.p5.1.m1.1.1.cmml" xref="S3.SS1.p5.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p5.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p5.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p5.1.1">Whisper.</em> Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>, based on the classic encoder-decoder architecture, has gained widespread attention in the field of speech recognition. The encoder transforms input speech into high-level feature representations, while the decoder generates the corresponding text output from these representations. Pretrained on large-scale data across various speech environments with text as the target, Whisper demonstrates strong capabilities in extracting semantic information from speech. Qwen-Audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>]</cite> and Qwen-Audio 2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>]</cite> use Whisper’s encoder to convert speech into continuous representations, which are then combined with text representations and fed into the large language model.
Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite>, Mini-Omni 2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite>, and LLama-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> follow a similar approach, connecting a speech adapter after the Whisper encoder. Their shared objective is to map speech representations into the text embedding space of the large language model, enhancing the model’s ability to understand speech by forcibly aligning the two modalities through vocabulary expansion.</p> </div> <div class="ltx_para" id="S3.SS1.p6"> <p class="ltx_p" id="S3.SS1.p6.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p6.1.m1.1"><semantics id="S3.SS1.p6.1.m1.1a"><mo id="S3.SS1.p6.1.m1.1.1" xref="S3.SS1.p6.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p6.1.m1.1b"><ci id="S3.SS1.p6.1.m1.1.1.cmml" xref="S3.SS1.p6.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p6.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p6.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p6.1.1">WavLM.</em> WavLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib27" title="">27</a>]</cite> is a pretrained model designed for comprehensive speech processing tasks, playing a critical role in advancing speech technology. Specifically, WavLM employs a masked speech denoising and prediction framework, where some inputs consist of simulated noise or overlapping speech with masked sections. The goal is to predict pseudo-labels of the original speech in the masked areas.
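The masked-prediction setup shared by HuBERT and WavLM can be sketched as follows; the span length, number of spans, and integer pseudo-labels are illustrative values rather than the models' actual hyperparameters:

```python
import random

def mask_spans(num_frames, num_spans=1, span=3, seed=0):
    """Pick random starting frames and mask `span` consecutive frames from
    each, mimicking the masked-prediction setup of HuBERT and WavLM."""
    rng = random.Random(seed)
    masked = set()
    for start in rng.sample(range(num_frames), k=num_spans):
        masked.update(range(start, min(start + span, num_frames)))
    return sorted(masked)

def training_targets(pseudo_labels, masked_positions):
    """The loss is computed only on masked frames: the model must predict
    each masked frame's pseudo-label from the surrounding (unmasked) context."""
    return {t: pseudo_labels[t] for t in masked_positions}

# 10 toy frames whose pseudo-labels are just the cluster indices 0..9.
masked = mask_spans(num_frames=10)
targets = training_targets(list(range(10)), masked)
```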
This approach enables the model to learn ASR-related information through masked speech prediction, while also gaining knowledge relevant to non-ASR tasks through speech denoising modeling. The masking and prediction pipeline for speech frames in WavLM is similar to that of HuBERT. However, WavLM introduces an additional gated relative position bias to enhance the model’s sensitivity to temporal information in speech. SpeechVerse <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib41" title="">41</a>]</cite> leverages the pretrained WavLM Large as its backbone speech encoder, encoding all intermediate layer features from WavLM to capture various forms of semantics and achieve better generalization performance. To address the significant length disparity between speech features and text tokens, SpeechVerse applies a learnable convolutional module for downsampling the speech features.</p> </div> <div class="ltx_para" id="S3.SS1.p7"> <p class="ltx_p" id="S3.SS1.p7.6"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p7.1.m1.1"><semantics id="S3.SS1.p7.1.m1.1a"><mo id="S3.SS1.p7.1.m1.1.1" xref="S3.SS1.p7.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p7.1.m1.1b"><ci id="S3.SS1.p7.1.m1.1.1.cmml" xref="S3.SS1.p7.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p7.2.1"><math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS1.p7.2.1.m1.1"><semantics id="S3.SS1.p7.2.1.m1.1a"><msup id="S3.SS1.p7.2.1.m1.1.1" xref="S3.SS1.p7.2.1.m1.1.1.cmml"><mi id="S3.SS1.p7.2.1.m1.1.1.2" xref="S3.SS1.p7.2.1.m1.1.1.2.cmml">S</mi><mn id="S3.SS1.p7.2.1.m1.1.1.3" xref="S3.SS1.p7.2.1.m1.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" 
id="S3.SS1.p7.2.1.m1.1b"><apply id="S3.SS1.p7.2.1.m1.1.1.cmml" xref="S3.SS1.p7.2.1.m1.1.1"><csymbol cd="ambiguous" id="S3.SS1.p7.2.1.m1.1.1.1.cmml" xref="S3.SS1.p7.2.1.m1.1.1">superscript</csymbol><ci id="S3.SS1.p7.2.1.m1.1.1.2.cmml" xref="S3.SS1.p7.2.1.m1.1.1.2">𝑆</ci><cn id="S3.SS1.p7.2.1.m1.1.1.3.cmml" type="integer" xref="S3.SS1.p7.2.1.m1.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.2.1.m1.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.2.1.m1.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> Tokenizer.</em> CosyVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite> proposes using a supervised automatic speech recognition module to generate a supervised semantic speech (<math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS1.p7.3.m2.1"><semantics id="S3.SS1.p7.3.m2.1a"><msup id="S3.SS1.p7.3.m2.1.1" xref="S3.SS1.p7.3.m2.1.1.cmml"><mi id="S3.SS1.p7.3.m2.1.1.2" xref="S3.SS1.p7.3.m2.1.1.2.cmml">S</mi><mn id="S3.SS1.p7.3.m2.1.1.3" xref="S3.SS1.p7.3.m2.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS1.p7.3.m2.1b"><apply id="S3.SS1.p7.3.m2.1.1.cmml" xref="S3.SS1.p7.3.m2.1.1"><csymbol cd="ambiguous" id="S3.SS1.p7.3.m2.1.1.1.cmml" xref="S3.SS1.p7.3.m2.1.1">superscript</csymbol><ci id="S3.SS1.p7.3.m2.1.1.2.cmml" xref="S3.SS1.p7.3.m2.1.1.2">𝑆</ci><cn id="S3.SS1.p7.3.m2.1.1.3.cmml" type="integer" xref="S3.SS1.p7.3.m2.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.3.m2.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.3.m2.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math>) tokenizer.
Unlike a standard ASR model, the <math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS1.p7.4.m3.1"><semantics id="S3.SS1.p7.4.m3.1a"><msup id="S3.SS1.p7.4.m3.1.1" xref="S3.SS1.p7.4.m3.1.1.cmml"><mi id="S3.SS1.p7.4.m3.1.1.2" xref="S3.SS1.p7.4.m3.1.1.2.cmml">S</mi><mn id="S3.SS1.p7.4.m3.1.1.3" xref="S3.SS1.p7.4.m3.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS1.p7.4.m3.1b"><apply id="S3.SS1.p7.4.m3.1.1.cmml" xref="S3.SS1.p7.4.m3.1.1"><csymbol cd="ambiguous" id="S3.SS1.p7.4.m3.1.1.1.cmml" xref="S3.SS1.p7.4.m3.1.1">superscript</csymbol><ci id="S3.SS1.p7.4.m3.1.1.2.cmml" xref="S3.SS1.p7.4.m3.1.1.2">𝑆</ci><cn id="S3.SS1.p7.4.m3.1.1.3.cmml" type="integer" xref="S3.SS1.p7.4.m3.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.4.m3.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.4.m3.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> tokenizer splits the encoder into two parts and introduces a vector quantization layer in between. The first encoder converts the mel spectrogram into context-aware representations, while the second encoder transforms discrete speech units into continuous hidden states. Finally, a Transformer-based ASR decoder predicts the posterior probabilities of text labels. 
Through supervision in multilingual ASR tasks, the <math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS1.p7.5.m4.1"><semantics id="S3.SS1.p7.5.m4.1a"><msup id="S3.SS1.p7.5.m4.1.1" xref="S3.SS1.p7.5.m4.1.1.cmml"><mi id="S3.SS1.p7.5.m4.1.1.2" xref="S3.SS1.p7.5.m4.1.1.2.cmml">S</mi><mn id="S3.SS1.p7.5.m4.1.1.3" xref="S3.SS1.p7.5.m4.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS1.p7.5.m4.1b"><apply id="S3.SS1.p7.5.m4.1.1.cmml" xref="S3.SS1.p7.5.m4.1.1"><csymbol cd="ambiguous" id="S3.SS1.p7.5.m4.1.1.1.cmml" xref="S3.SS1.p7.5.m4.1.1">superscript</csymbol><ci id="S3.SS1.p7.5.m4.1.1.2.cmml" xref="S3.SS1.p7.5.m4.1.1.2">𝑆</ci><cn id="S3.SS1.p7.5.m4.1.1.3.cmml" type="integer" xref="S3.SS1.p7.5.m4.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.5.m4.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.5.m4.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> tokenizer can convert speech into semantically consistent tokens that facilitate both speech understanding and generation. 
OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> uses the <math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS1.p7.6.m5.1"><semantics id="S3.SS1.p7.6.m5.1a"><msup id="S3.SS1.p7.6.m5.1.1" xref="S3.SS1.p7.6.m5.1.1.cmml"><mi id="S3.SS1.p7.6.m5.1.1.2" xref="S3.SS1.p7.6.m5.1.1.2.cmml">S</mi><mn id="S3.SS1.p7.6.m5.1.1.3" xref="S3.SS1.p7.6.m5.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS1.p7.6.m5.1b"><apply id="S3.SS1.p7.6.m5.1.1.cmml" xref="S3.SS1.p7.6.m5.1.1"><csymbol cd="ambiguous" id="S3.SS1.p7.6.m5.1.1.1.cmml" xref="S3.SS1.p7.6.m5.1.1">superscript</csymbol><ci id="S3.SS1.p7.6.m5.1.1.2.cmml" xref="S3.SS1.p7.6.m5.1.1.2">𝑆</ci><cn id="S3.SS1.p7.6.m5.1.1.3.cmml" type="integer" xref="S3.SS1.p7.6.m5.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p7.6.m5.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p7.6.m5.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> tokenizer to extract discrete speech tokens, which are then directly fed into a text-speech pre-trained Transformer.</p> </div> <div class="ltx_para" id="S3.SS1.p8"> <p class="ltx_p" id="S3.SS1.p8.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p8.1.m1.1"><semantics id="S3.SS1.p8.1.m1.1a"><mo id="S3.SS1.p8.1.m1.1.1" xref="S3.SS1.p8.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p8.1.m1.1b"><ci id="S3.SS1.p8.1.m1.1.1.cmml" xref="S3.SS1.p8.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p8.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p8.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p8.1.1">SPIRAL.</em> SPIRAL <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2411.13577v1#bib.bib85" title="">85</a>]</cite> aims to learn representations from speech data that are robust to noise and perturbations. It uses a teacher-student network, where various perturbations, such as noise addition, gain adjustment, and time-frequency warping, are applied to the speech input of the student model. The teacher model then guides the student model to produce consistent representations despite these perturbations. EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite> utilizes SPIRAL’s architecture as its speech encoder and employs finite scalar quantization <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib149" title="">149</a>]</cite> to discretize these features. This process aligns speech with the text vocabulary, allowing for a more natural integration into the LLM.</p> </div> <div class="ltx_para" id="S3.SS1.p9"> <p class="ltx_p" id="S3.SS1.p9.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p9.1.m1.1"><semantics id="S3.SS1.p9.1.m1.1a"><mo id="S3.SS1.p9.1.m1.1.1" xref="S3.SS1.p9.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p9.1.m1.1b"><ci id="S3.SS1.p9.1.m1.1.1.cmml" xref="S3.SS1.p9.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p9.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p9.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p9.1.1">Others.</em> Some spoken dialogue systems do not use pre-trained representation models; instead, they process input features by stacking fundamental modules.
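The finite scalar quantization step mentioned above for EMOVA can be sketched as per-dimension clamping and rounding; the number of levels and the input vector are illustrative, not EMOVA's actual configuration:

```python
def fsq(vector, levels=5, bound=1.0):
    """Finite scalar quantization: clamp each dimension to [-bound, bound]
    and round it to one of `levels` evenly spaced values; the tuple of
    per-dimension level indices identifies the discrete code."""
    step = 2 * bound / (levels - 1)
    indices, values = [], []
    for x in vector:
        x = max(-bound, min(bound, x))  # clamp out-of-range values
        idx = round((x + bound) / step)
        indices.append(idx)
        values.append(-bound + idx * step)
    return indices, values

idx, quantized = fsq([0.23, -0.9, 1.7], levels=5)
```

Unlike codebook-based VQ, there is no learned codebook to collapse: the set of codes is fixed by the grid itself.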
VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> initially decomposes the speech signal using mel filter banks, mimicking the nonlinear perception of sound in humans. It then processes the input features with a 4-layer CNN downsampling module followed by a 24-layer Transformer. To align with the subsequent language model, VITA employs a simple 2-layer MLP as an adapter. Freeze-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib213" title="">213</a>]</cite> utilizes a chunk-wise streaming speech encoder to transform input speech features into high-dimensional representations. An adapter module then maps these high-dimensional representations into the embedding space of the main LLM, ensuring a quick, low-latency response to the input speech. The speech encoder module consists of several downsampling convolutional layers and Transformer blocks, while the adapter includes only a few downsampling convolutional layers. Downsampling layers are used to reduce the frame rate of speech features, increase the LLM’s processing speed during the prefill phase, and minimize latency.</p> </div> <div class="ltx_para" id="S3.SS1.p10"> <p class="ltx_p" id="S3.SS1.p10.1"><span class="ltx_text ltx_font_bold" id="S3.SS1.p10.1.1">Acoustic.</span> Considering that semantic features are insufficient to capture the emotion, timbre, and style of speech, some representation models, such as Emotion2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib143" title="">143</a>]</cite>, attempt to extract acoustic information through self-supervised training. 
Others focus on reconstruction objectives to ensure high-fidelity speech, including models like EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite>, SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite>, and Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>.</p> </div> <div class="ltx_para" id="S3.SS1.p11"> <p class="ltx_p" id="S3.SS1.p11.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p11.1.m1.1"><semantics id="S3.SS1.p11.1.m1.1a"><mo id="S3.SS1.p11.1.m1.1.1" xref="S3.SS1.p11.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p11.1.m1.1b"><ci id="S3.SS1.p11.1.m1.1.1.cmml" xref="S3.SS1.p11.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p11.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p11.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p11.1.1">EnCodec.</em> EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> is a straightforward, streaming, convolution-based encoder-decoder architecture. Raw speech is downsampled through a series of convolutional layers, mapping it to latent feature representations. Residual vector quantization <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib238" title="">238</a>]</cite> then discretizes the encoder’s continuous latent features. The quantization objective is to map continuous features to a predefined set of discrete tokens (known as a "codebook") for subsequent compression and transmission.
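The residual quantization scheme can be sketched as follows, with two tiny hand-written codebooks standing in for EnCodec's learned ones:

```python
def nearest(v, codebook):
    """Index of the codebook entry closest to v (squared Euclidean)."""
    def sqdist(u, w):
        return sum((a - b) ** 2 for a, b in zip(u, w))
    return min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))

def rvq_encode(v, codebooks):
    """Residual vector quantization: each stage quantizes the residual
    left over by the previous stage, yielding one index per stage."""
    residual = list(v)
    indices = []
    for cb in codebooks:
        idx = nearest(residual, cb)
        indices.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return indices

def rvq_decode(indices, codebooks):
    """Sum the selected entries of every stage to reconstruct the vector."""
    out = [0.0] * len(codebooks[0][0])
    for idx, cb in zip(indices, codebooks):
        out = [o + c for o, c in zip(out, cb[idx])]
    return out

# Two toy stages: a coarse codebook, then a finer one for the residual.
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],              # stage 1 (coarse)
    [[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]],  # stage 2 (fine)
]
codes = rvq_encode([1.15, 0.9], codebooks)
approx = rvq_decode(codes, codebooks)
```

Each additional stage refines the reconstruction, which is why dropping higher-order quantizers trades fidelity for bitrate.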
The decoder restores the discrete features to a waveform close to the original speech through a series of de-convolution layers. LauraGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib50" title="">50</a>]</cite> employs an enhanced version of EnCodec as its speech encoder with specific modifications: (1) adding a reconstruction loss in the magnitude spectral domain to improve mid-to-high frequency signal quality; (2) stacking five strided convolutional blocks with strides of (8, 5, 4, 2, 2) to address the challenges of long sequence lengths, resulting in a token rate of 25Hz per token group; and (3) using 32 quantizers with structured dropout in the Residual Vector Quantization (RVQ) module, each with a vocabulary size of 1024. This revision increases speech quality by incorporating more quantizers while preserving most information in the shallow quantizers. LauraGPT ultimately selects the output from the first quantizer layer as the speech token, balancing performance with sequence length efficiency. 
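The 25Hz token rate quoted above follows directly from the stride configuration, since each strided block divides the frame rate by its stride:

```python
from functools import reduce

def token_rate(sample_rate_hz, strides):
    """Each strided convolution divides the frame rate by its stride, so the
    final token rate is the sample rate over the product of all strides."""
    total_downsampling = reduce(lambda a, b: a * b, strides, 1)
    return sample_rate_hz / total_downsampling

# LauraGPT's five strided blocks on 16 kHz audio: 16000 / (8*5*4*2*2) = 25 Hz.
rate = token_rate(16000, [8, 5, 4, 2, 2])
```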
The remaining quantizers are used only during the training of the encoder-decoder model.</p> </div> <div class="ltx_para" id="S3.SS1.p12"> <p class="ltx_p" id="S3.SS1.p12.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p12.1.m1.1"><semantics id="S3.SS1.p12.1.m1.1a"><mo id="S3.SS1.p12.1.m1.1.1" xref="S3.SS1.p12.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p12.1.m1.1b"><ci id="S3.SS1.p12.1.m1.1.1.cmml" xref="S3.SS1.p12.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p12.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p12.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p12.1.1">SpeechTokenizer.</em> SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite> unifies semantic and acoustic tokens, hierarchically decomposing different aspects of speech information across various RVQ layers. It is built on the framework of RVQ-GANs, following the same pattern as SoundStream <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib238" title="">238</a>]</cite> and EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite>. Notably, SpeechTokenizer has substituted the two-layer LSTM, originally following the convolution blocks in the EnCodec encoder, with a two-layer BiLSTM to augment the semantic modeling ability. SpeechTokenizer uses HuBERT as a semantic teacher, given HuBERT’s proven capacity to encode substantial content information <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib155" title="">155</a>]</cite>. 
During training, it introduces two types of distillation: continuous representation distillation and pseudo-label prediction. For continuous representation distillation, SpeechTokenizer employs the 9th layer HuBERT representation or the average representation across all HuBERT layers as semantic teachers. The training objective is to maximize the cosine similarity at the dimension level across all timesteps between the outputs of the first RVQ layer and the semantic teacher representations. For pseudo-label prediction, SpeechTokenizer adopts HuBERT units as the target labels. In dialogue systems, SpeechGPT-Gen uses SpeechTokenizer RVQ-1 to process raw speech, primarily enhancing the large language model’s ability to model the semantics of speech.</p> </div> <div class="ltx_para" id="S3.SS1.p13"> <p class="ltx_p" id="S3.SS1.p13.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p13.1.m1.1"><semantics id="S3.SS1.p13.1.m1.1a"><mo id="S3.SS1.p13.1.m1.1.1" xref="S3.SS1.p13.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p13.1.m1.1b"><ci id="S3.SS1.p13.1.m1.1.1.cmml" xref="S3.SS1.p13.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p13.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p13.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p13.1.1">Mimi.</em> Taking inspiration from previous work on SpeechTokenizer, Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> uses distillation to transfer non-causal, high-level semantic information into the tokens produced by a causal model, allowing for streaming encoding and decoding of semantic-acoustic tokens. To improve the ability of Mimi to encode speech into compact representations while reconstructing high-quality speech, Transformer modules are added in the encoder and decoder.
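The cosine-similarity distillation used by SpeechTokenizer (and echoed in Mimi's design, described next) can be sketched in a simplified frame-level form; the actual objective operates at the dimension level across all timesteps, and the teacher/student vectors below are toy values:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def distill_loss(student_frames, teacher_frames):
    """Semantic distillation sketch: minimize (1 - mean cosine similarity)
    between the first quantizer's outputs and the teacher representations."""
    sims = [cosine(s, t) for s, t in zip(student_frames, teacher_frames)]
    return 1.0 - sum(sims) / len(sims)

teacher = [[1.0, 0.0], [0.0, 1.0]]
aligned = [[2.0, 0.0], [0.0, 0.5]]      # parallel to the teacher, loss near 0
mismatched = [[0.0, 1.0], [1.0, 0.0]]   # orthogonal to the teacher, loss 1
```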
Mimi uses WavLM to distill RVQ-1, enriching it with semantic information. Notably, performing distillation significantly enhances the speech discrimination capability of the first quantizer; however, it can also negatively impact speech quality. Mimi hypothesizes that this is due to distilling semantic information into the first level of a single RVQ: as higher-order quantizers operate on the residual of the first one, the latter needs to trade speech quality for phonetic discriminability. Mimi addresses this issue by introducing a split-RVQ approach. Instead of using a single 8-level RVQ, it extracts semantic information into a simple VQ and applies a parallel 7-level RVQ, combining their outputs at the end. This removes the constraint that acoustic information must be preserved in the residuals of the semantic quantizer. After this careful design, Mimi serves as the speech encoder in Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, enhancing the model’s ability to capture both semantic and acoustic details.</p> </div> <div class="ltx_para" id="S3.SS1.p14"> <p class="ltx_p" id="S3.SS1.p14.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS1.p14.1.m1.1"><semantics id="S3.SS1.p14.1.m1.1a"><mo id="S3.SS1.p14.1.m1.1.1" xref="S3.SS1.p14.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS1.p14.1.m1.1b"><ci id="S3.SS1.p14.1.m1.1.1.cmml" xref="S3.SS1.p14.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS1.p14.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS1.p14.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS1.p14.1.1">Emotion2Vec.</em> Emotion2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib143" title="">143</a>]</cite> is a versatile speech emotion
representation model designed to extract emotional features from speech. During the pre-training phase, Emotion2Vec conducts online distillation with a teacher network and a student network. When a specific downstream task is performed, Emotion2Vec is frozen and a lightweight downstream model is trained. Emotion2Vec introduces an utterance-level loss to control global emotion and employs a frame-level loss to build a frame-wise pretext task, enabling it to learn contextual emotions. Spoken-LLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>]</cite> uses features extracted by Emotion2Vec as input for the large language model, aiming to enable the model to understand and respond to emotions.</p> </div> </section> <section class="ltx_subsection" id="S3.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.2 </span>Speech Representations at the Outputs</h3> <div class="ltx_para" id="S3.SS2.p1"> <p class="ltx_p" id="S3.SS2.p1.1"><span class="ltx_text ltx_font_bold" id="S3.SS2.p1.1.1">Semantic.</span> At the output stage, most spoken dialogue systems choose to autoregressively model semantic tokens, such as <math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS2.p1.1.m1.1"><semantics id="S3.SS2.p1.1.m1.1a"><msup id="S3.SS2.p1.1.m1.1.1" xref="S3.SS2.p1.1.m1.1.1.cmml"><mi id="S3.SS2.p1.1.m1.1.1.2" xref="S3.SS2.p1.1.m1.1.1.2.cmml">S</mi><mn id="S3.SS2.p1.1.m1.1.1.3" xref="S3.SS2.p1.1.m1.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS2.p1.1.m1.1b"><apply id="S3.SS2.p1.1.m1.1.1.cmml" xref="S3.SS2.p1.1.m1.1.1"><csymbol cd="ambiguous" id="S3.SS2.p1.1.m1.1.1.1.cmml" xref="S3.SS2.p1.1.m1.1.1">superscript</csymbol><ci id="S3.SS2.p1.1.m1.1.1.2.cmml" xref="S3.SS2.p1.1.m1.1.1.2">𝑆</ci><cn id="S3.SS2.p1.1.m1.1.1.3.cmml" type="integer" xref="S3.SS2.p1.1.m1.1.1.3">3</cn></apply></annotation-xml><annotation
encoding="application/x-tex" id="S3.SS2.p1.1.m1.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p1.1.m1.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> tokens <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite> and HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> units. It is worth noting that these semantic tokens lack acoustic conditioning and therefore require a vocoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib166" title="">166</a>]</cite> or decoder, which further takes semantic discrete units as input to synthesize speech consistent with the speakers encountered during training.</p> </div> <div class="ltx_para" id="S3.SS2.p2"> <p class="ltx_p" id="S3.SS2.p2.3"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p2.1.m1.1"><semantics id="S3.SS2.p2.1.m1.1a"><mo id="S3.SS2.p2.1.m1.1.1" xref="S3.SS2.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p2.1.m1.1b"><ci id="S3.SS2.p2.1.m1.1.1.cmml" xref="S3.SS2.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p2.2.1"><math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS2.p2.2.1.m1.1"><semantics id="S3.SS2.p2.2.1.m1.1a"><msup id="S3.SS2.p2.2.1.m1.1.1" xref="S3.SS2.p2.2.1.m1.1.1.cmml"><mi id="S3.SS2.p2.2.1.m1.1.1.2" xref="S3.SS2.p2.2.1.m1.1.1.2.cmml">S</mi><mn id="S3.SS2.p2.2.1.m1.1.1.3"
xref="S3.SS2.p2.2.1.m1.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS2.p2.2.1.m1.1b"><apply id="S3.SS2.p2.2.1.m1.1.1.cmml" xref="S3.SS2.p2.2.1.m1.1.1"><csymbol cd="ambiguous" id="S3.SS2.p2.2.1.m1.1.1.1.cmml" xref="S3.SS2.p2.2.1.m1.1.1">superscript</csymbol><ci id="S3.SS2.p2.2.1.m1.1.1.2.cmml" xref="S3.SS2.p2.2.1.m1.1.1.2">𝑆</ci><cn id="S3.SS2.p2.2.1.m1.1.1.3.cmml" type="integer" xref="S3.SS2.p2.2.1.m1.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p2.2.1.m1.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p2.2.1.m1.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> Tokenizer.</em> OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> uses the LLM to autoregressively predict <math alttext="S^{3}" class="ltx_Math" display="inline" id="S3.SS2.p2.3.m2.1"><semantics id="S3.SS2.p2.3.m2.1a"><msup id="S3.SS2.p2.3.m2.1.1" xref="S3.SS2.p2.3.m2.1.1.cmml"><mi id="S3.SS2.p2.3.m2.1.1.2" xref="S3.SS2.p2.3.m2.1.1.2.cmml">S</mi><mn id="S3.SS2.p2.3.m2.1.1.3" xref="S3.SS2.p2.3.m2.1.1.3.cmml">3</mn></msup><annotation-xml encoding="MathML-Content" id="S3.SS2.p2.3.m2.1b"><apply id="S3.SS2.p2.3.m2.1.1.cmml" xref="S3.SS2.p2.3.m2.1.1"><csymbol cd="ambiguous" id="S3.SS2.p2.3.m2.1.1.1.cmml" xref="S3.SS2.p2.3.m2.1.1">superscript</csymbol><ci id="S3.SS2.p2.3.m2.1.1.2.cmml" xref="S3.SS2.p2.3.m2.1.1.2">𝑆</ci><cn id="S3.SS2.p2.3.m2.1.1.3.cmml" type="integer" xref="S3.SS2.p2.3.m2.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p2.3.m2.1c">S^{3}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p2.3.m2.1d">italic_S start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT</annotation></semantics></math> tokens at the speech output stage. 
When converting discrete tokens back into speech, it adopts the same optimal transport conditional flow matching model (OT-CFM) as used in CosyVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite>. OT-CFM transforms the speech token sequence into a Mel spectrogram, which is then used to generate the final speech with the HiFi-GAN vocoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>]</cite>.</p> </div> <div class="ltx_para" id="S3.SS2.p3"> <p class="ltx_p" id="S3.SS2.p3.2"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p3.1.m1.1"><semantics id="S3.SS2.p3.1.m1.1a"><mo id="S3.SS2.p3.1.m1.1.1" xref="S3.SS2.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p3.1.m1.1b"><ci id="S3.SS2.p3.1.m1.1.1.cmml" xref="S3.SS2.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p3.2.1">HuBERT.</em> Speech tokens extracted by the pre-trained HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> are widely used as generation targets for large language models in spoken dialogue systems.
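The flow-matching step of such a detokenization pipeline integrates a learned vector field from noise toward a Mel spectrogram conditioned on the token sequence. The toy sketch below shows this integration with a simple Euler solver; `dummy_vector_field` is a stand-in for the trained OT-CFM network (the vocoder stage is omitted), and all shapes and rates are illustrative assumptions.

```python
import numpy as np

def euler_flow_matching(vector_field, tokens, steps=8, dim=80, frames=20, seed=0):
    """Integrate dx/dt = v(x, t, tokens) from t=0 (noise) to t=1 (mel)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((frames, dim))   # sample from the simple prior
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * vector_field(x, i * dt, tokens)
    return x                                  # predicted mel spectrogram

def dummy_vector_field(x, t, tokens):
    # A trained network would condition on the token sequence; here we just
    # pull x toward a token-dependent constant target for illustration.
    target = np.full_like(x, float(sum(tokens)) / len(tokens))
    return target - x

mel = euler_flow_matching(dummy_vector_field, tokens=[3, 1, 2])
# in a real system: waveform = hifigan_vocoder(mel)
```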
SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite> and Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> use LLaMA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib200" title="">200</a>]</cite> to autoregressively predict a sequence of units and rely on a HuBERT unit-based HiFi-GAN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>]</cite> trained to decode the speech signal from discrete representations. PSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib154" title="">154</a>]</cite> introduces an additional speech projection layer after the Transformer layers to process the hidden states, obtaining semantic tokens via the softmax layer. The speech decoder in LLama-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> operates in a non-autoregressive manner, taking the output hidden states of the large language model as input to generate a discrete HuBERT unit sequence corresponding to the speech response. The discrete units can be converted into a waveform with an additional unit-based vocoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib166" title="">166</a>]</cite>. IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> introduces Group-Former to enhance the large language model’s capability in sequence modeling.
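A speech projection head of the kind described here (hidden states mapped to a distribution over discrete units via a linear layer and softmax) can be sketched as follows. The hidden size and the 500-unit vocabulary are illustrative assumptions, not the actual configuration of PSLM or any specific system.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)     # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predict_units(hidden, proj_w, proj_b):
    """hidden: (T, D) LM hidden states -> (T,) greedy HuBERT-style unit ids."""
    logits = hidden @ proj_w + proj_b            # (T, n_units)
    probs = softmax(logits)
    return probs.argmax(axis=-1)

rng = np.random.default_rng(0)
T, D, n_units = 10, 32, 500                      # 500 unit clusters (assumed)
hidden = rng.standard_normal((T, D))
units = predict_units(hidden,
                      rng.standard_normal((D, n_units)) * 0.02,
                      np.zeros(n_units))
```

The resulting unit sequence would then be handed to a unit-based vocoder for waveform synthesis.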
When the large language model predicts the <math alttext="<speech>" class="ltx_Math" display="inline" id="S3.SS2.p3.2.m2.1"><semantics id="S3.SS2.p3.2.m2.1a"><mrow id="S3.SS2.p3.2.m2.1.1.1" xref="S3.SS2.p3.2.m2.1.1.2.cmml"><mo fence="true" id="S3.SS2.p3.2.m2.1.1.1.2" rspace="0em" xref="S3.SS2.p3.2.m2.1.1.2.1.cmml"><</mo><mrow id="S3.SS2.p3.2.m2.1.1.1.1" xref="S3.SS2.p3.2.m2.1.1.1.1.cmml"><mi id="S3.SS2.p3.2.m2.1.1.1.1.2" xref="S3.SS2.p3.2.m2.1.1.1.1.2.cmml">s</mi><mo id="S3.SS2.p3.2.m2.1.1.1.1.1" xref="S3.SS2.p3.2.m2.1.1.1.1.1.cmml"></mo><mi id="S3.SS2.p3.2.m2.1.1.1.1.3" xref="S3.SS2.p3.2.m2.1.1.1.1.3.cmml">p</mi><mo id="S3.SS2.p3.2.m2.1.1.1.1.1a" xref="S3.SS2.p3.2.m2.1.1.1.1.1.cmml"></mo><mi id="S3.SS2.p3.2.m2.1.1.1.1.4" xref="S3.SS2.p3.2.m2.1.1.1.1.4.cmml">e</mi><mo id="S3.SS2.p3.2.m2.1.1.1.1.1b" xref="S3.SS2.p3.2.m2.1.1.1.1.1.cmml"></mo><mi id="S3.SS2.p3.2.m2.1.1.1.1.5" xref="S3.SS2.p3.2.m2.1.1.1.1.5.cmml">e</mi><mo id="S3.SS2.p3.2.m2.1.1.1.1.1c" xref="S3.SS2.p3.2.m2.1.1.1.1.1.cmml"></mo><mi id="S3.SS2.p3.2.m2.1.1.1.1.6" xref="S3.SS2.p3.2.m2.1.1.1.1.6.cmml">c</mi><mo id="S3.SS2.p3.2.m2.1.1.1.1.1d" xref="S3.SS2.p3.2.m2.1.1.1.1.1.cmml"></mo><mi id="S3.SS2.p3.2.m2.1.1.1.1.7" xref="S3.SS2.p3.2.m2.1.1.1.1.7.cmml">h</mi></mrow><mo fence="true" id="S3.SS2.p3.2.m2.1.1.1.3" lspace="0em" xref="S3.SS2.p3.2.m2.1.1.2.1.cmml">></mo></mrow><annotation-xml encoding="MathML-Content" id="S3.SS2.p3.2.m2.1b"><apply id="S3.SS2.p3.2.m2.1.1.2.cmml" xref="S3.SS2.p3.2.m2.1.1.1"><csymbol cd="latexml" id="S3.SS2.p3.2.m2.1.1.2.1.cmml" xref="S3.SS2.p3.2.m2.1.1.1.2">expectation</csymbol><apply id="S3.SS2.p3.2.m2.1.1.1.1.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1"><times id="S3.SS2.p3.2.m2.1.1.1.1.1.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.1"></times><ci id="S3.SS2.p3.2.m2.1.1.1.1.2.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.2">𝑠</ci><ci id="S3.SS2.p3.2.m2.1.1.1.1.3.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.3">𝑝</ci><ci id="S3.SS2.p3.2.m2.1.1.1.1.4.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.4">𝑒</ci><ci 
id="S3.SS2.p3.2.m2.1.1.1.1.5.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.5">𝑒</ci><ci id="S3.SS2.p3.2.m2.1.1.1.1.6.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.6">𝑐</ci><ci id="S3.SS2.p3.2.m2.1.1.1.1.7.cmml" xref="S3.SS2.p3.2.m2.1.1.1.1.7">ℎ</ci></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p3.2.m2.1c"><speech></annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p3.2.m2.1d">< italic_s italic_p italic_e italic_e italic_c italic_h ></annotation></semantics></math> token, the global embedding is passed through a projection layer and delivered, along with a set of learnable queries, to the group model, which then predicts units. IntrinsicVoice uses HiFi-GAN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>]</cite>, a non-autoregressive neural vocoder that efficiently generates high-fidelity waveforms, for speech detokenization to reduce overall latency. Align-SLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib129" title="">129</a>]</cite> also uses a HiFiGAN-based <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>]</cite> model to convert discrete units back into waveforms, utilizing model checkpoints from the textlesslib <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib102" title="">102</a>]</cite> library.</p> </div> <div class="ltx_para" id="S3.SS2.p4"> <p class="ltx_p" id="S3.SS2.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p4.1.m1.1"><semantics id="S3.SS2.p4.1.m1.1a"><mo id="S3.SS2.p4.1.m1.1.1" xref="S3.SS2.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p4.1.m1.1b"><ci id="S3.SS2.p4.1.m1.1.1.cmml" xref="S3.SS2.p4.1.m1.1.1">∙</ci></annotation-xml><annotation 
encoding="application/x-tex" id="S3.SS2.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p4.1.1">Others.</em> USDM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib106" title="">106</a>]</cite> does not generate speech directly from input speech; instead, it first transcribes the speech, generates the response text, and then produces the corresponding speech tokens in an end-to-end pipeline. By inserting text-related tasks between speech input and output, the model benefits from both pre-trained LLMs and chain-of-thought <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib218" title="">218</a>]</cite> reasoning in the intermediate modality. Since each stage in the pipeline processes all input and output tokens generated by the previous stage, USDM is more robust to transcription errors and better able to produce contextually relevant spoken responses compared to a cascaded approach with separate modules. USDM uses the Voicebox <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib117" title="">117</a>]</cite> architecture to train a unit-to-speech model for reconstructing speech from units. EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite> generates a response in the form of speech units when given an image or speech input, which is then converted into an output waveform using the U2S detokenizer.
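The single-pass, interleaved structure of such a chain-of-modality pipeline can be sketched as a token-sequence template: the model generates transcript, text reply, and response speech tokens in one autoregressive sequence, so each stage can attend to everything produced so far. The marker tokens below are hypothetical placeholders, not USDM's actual special tokens.

```python
def build_sequence(speech_in, transcript, reply_text, speech_out):
    """Interleave speech tokens and text tokens into one training/decoding
    sequence: input speech -> transcript -> text reply -> output speech."""
    return (["<speech_in>"] + speech_in +
            ["<transcript>"] + transcript.split() +
            ["<reply>"] + reply_text.split() +
            ["<speech_out>"] + speech_out)

seq = build_sequence([101, 102], "what time is it", "it is noon", [201, 202, 203])
```

At inference, only the input speech tokens are given and the model continues the sequence, emitting the intermediate text before the speech units.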
The U2S detokenizer follows the VAE architecture: it uses a speech unit encoder to convert the predicted speech units into continuous embeddings, combines these with style embeddings predicted by the large language model to determine duration, and finally reconstructs the speech waveform through the decoder.</p> </div> <div class="ltx_para" id="S3.SS2.p5"> <p class="ltx_p" id="S3.SS2.p5.1"><span class="ltx_text ltx_font_bold" id="S3.SS2.p5.1.1">Acoustic.</span> Many spoken dialogue systems choose to directly generate tokens from acoustic representation models, such as EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite>, SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite>, and Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>. 
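As background for these codecs, residual vector quantization (RVQ) can be sketched in a few lines: each layer quantizes the residual left by the previous layers, and the decoder side sums the selected codebook vectors back into a continuous latent before waveform reconstruction. Codebook sizes and the latent dimension below are toy assumptions.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """x: (D,) latent frame; codebooks: list of (K, D) arrays -> token ids."""
    ids, residual = [], x.copy()
    for cb in codebooks:
        k = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))  # nearest code
        ids.append(k)
        residual = residual - cb[k]       # next layer quantizes the leftover
    return ids

def rvq_decode(ids, codebooks):
    """Sum the selected codebook vectors to recover the latent frame."""
    return sum(cb[k] for cb, k in zip(codebooks, ids))

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((16, 8)) for _ in range(4)]   # 4 RVQ layers
x = rng.standard_normal(8)
ids = rvq_encode(x, codebooks)            # one token id per layer
x_hat = rvq_decode(ids, codebooks)        # fed to the frozen codec decoder
```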
These acoustic tokens are then directly upsampled into the raw waveform by the frozen codec decoder.</p> </div> <div class="ltx_para" id="S3.SS2.p6"> <p class="ltx_p" id="S3.SS2.p6.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p6.1.m1.1"><semantics id="S3.SS2.p6.1.m1.1a"><mo id="S3.SS2.p6.1.m1.1.1" xref="S3.SS2.p6.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p6.1.m1.1b"><ci id="S3.SS2.p6.1.m1.1.1.cmml" xref="S3.SS2.p6.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p6.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p6.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p6.1.1">Encodec.</em> LauraGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib50" title="">50</a>]</cite> uses Qwen-1.8B <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib11" title="">11</a>]</cite> to predict speech tokens. When synthesizing speech, it conditions the predictor not only on the speech tokens predicted by the LLM but also on text and speech inputs. Such text and speech conditioning allows the model to generate high-quality speech signals by leveraging the diverse information in the prompt and the noisy speech, which is lacking in the discrete tokens (output from the first quantizer of the Encodec). The predicted speech tokens and conditioning inputs are delivered together to the codec vocoder.
An encoder-only Transformer models these inputs into dense embeddings, which are then reconstructed into speech by the codec decoder.</p> </div> <div class="ltx_para" id="S3.SS2.p7"> <p class="ltx_p" id="S3.SS2.p7.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p7.1.m1.1"><semantics id="S3.SS2.p7.1.m1.1a"><mo id="S3.SS2.p7.1.m1.1.1" xref="S3.SS2.p7.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p7.1.m1.1b"><ci id="S3.SS2.p7.1.m1.1.1.cmml" xref="S3.SS2.p7.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p7.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p7.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p7.1.1">SNAC.</em> SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite> encodes speech into hierarchical tokens, similar to EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> and DAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib113" title="">113</a>]</cite>, by introducing quantization at different time resolutions to form a multi-scale discrete representation of speech. In this approach, shallow RVQ layers have a lower sampling frequency, covering a broader time span, while deeper RVQ layers sample at higher frequencies. SNAC introduces modest enhancements over RVQ-GAN by incorporating residual noise blocks, deep convolutions, and local window attention. 
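The multi-scale idea behind SNAC can be sketched with a toy encoder in which shallow layers tokenize a time-downsampled (coarser) view of the latent sequence and deeper layers operate at higher frame rates on the remaining residual. The strides and the rounding-based "codebook" below are illustrative assumptions, not SNAC's actual configuration.

```python
import numpy as np

def downsample(x, stride):
    """Average-pool a (T, D) latent sequence along time by `stride`."""
    T = (x.shape[0] // stride) * stride
    return x[:T].reshape(-1, stride, x.shape[1]).mean(axis=1)

def upsample(x, stride):
    """Nearest-neighbor upsample back to the fine frame rate."""
    return np.repeat(x, stride, axis=0)

def snac_style_encode(x, strides=(4, 2, 1)):
    """Per-layer codes; earlier layers are coarser in time (broader spans)."""
    layers, residual = [], x.copy()
    for s in strides:
        coarse = np.round(downsample(residual, s))   # toy scalar quantizer
        layers.append(coarse)
        residual = residual - upsample(coarse, s)    # finer layers refine the rest
    return layers

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4)) * 3
layers = snac_style_encode(x)   # shapes (4, 4), (8, 4), (16, 4): coarse -> fine
```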
The Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> series continues the parallel generation method introduced by MusicGen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib40" title="">40</a>]</cite>, utilizing SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite> as the speech encoder, which comprises seven complementary token layers. In a single step, it generates eight tokens, including text, while maintaining a one-step delay between layers. Furthermore, Mini-Omni and Mini-Omni 2 incorporate a batch approach that involves two samples: one requiring both text and speech responses and the other necessitating a text-only response. By discarding the text token from the first sample and embedding the output from the second sample into the first, this approach effectively transfers the model’s text-based capabilities to speech tasks, significantly enhancing reasoning abilities with minimal resource overhead.</p> </div> <div class="ltx_para" id="S3.SS2.p8"> <p class="ltx_p" id="S3.SS2.p8.7"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p8.1.m1.1"><semantics id="S3.SS2.p8.1.m1.1a"><mo id="S3.SS2.p8.1.m1.1.1" xref="S3.SS2.p8.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.1.m1.1b"><ci id="S3.SS2.p8.1.m1.1.1.cmml" xref="S3.SS2.p8.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p8.7.1">SpeechTokenizer.</em> On the output side, SpeechGPT-Gen synthesizes speech tokens using
flow matching<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib131" title="">131</a>]</cite>. Flow matching effectively models the transformation from a simple prior distribution to complex data distributions, yielding promising results in speech generation. SpeechGPT-Gen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>]</cite> applies flow matching for perceptual modeling, generating speech tokens that align with those of SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite>. Specifically, given speech <math alttext="S" class="ltx_Math" display="inline" id="S3.SS2.p8.2.m2.1"><semantics id="S3.SS2.p8.2.m2.1a"><mi id="S3.SS2.p8.2.m2.1.1" xref="S3.SS2.p8.2.m2.1.1.cmml">S</mi><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.2.m2.1b"><ci id="S3.SS2.p8.2.m2.1.1.cmml" xref="S3.SS2.p8.2.m2.1.1">𝑆</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.2.m2.1c">S</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.2.m2.1d">italic_S</annotation></semantics></math>, semantic representation <math alttext="V_{1}" class="ltx_Math" display="inline" id="S3.SS2.p8.3.m3.1"><semantics id="S3.SS2.p8.3.m3.1a"><msub id="S3.SS2.p8.3.m3.1.1" xref="S3.SS2.p8.3.m3.1.1.cmml"><mi id="S3.SS2.p8.3.m3.1.1.2" xref="S3.SS2.p8.3.m3.1.1.2.cmml">V</mi><mn id="S3.SS2.p8.3.m3.1.1.3" xref="S3.SS2.p8.3.m3.1.1.3.cmml">1</mn></msub><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.3.m3.1b"><apply id="S3.SS2.p8.3.m3.1.1.cmml" xref="S3.SS2.p8.3.m3.1.1"><csymbol cd="ambiguous" id="S3.SS2.p8.3.m3.1.1.1.cmml" xref="S3.SS2.p8.3.m3.1.1">subscript</csymbol><ci id="S3.SS2.p8.3.m3.1.1.2.cmml" xref="S3.SS2.p8.3.m3.1.1.2">𝑉</ci><cn id="S3.SS2.p8.3.m3.1.1.3.cmml" type="integer" 
xref="S3.SS2.p8.3.m3.1.1.3">1</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.3.m3.1c">V_{1}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.3.m3.1d">italic_V start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT</annotation></semantics></math>, perceptual representation <math alttext="V_{2:8}" class="ltx_Math" display="inline" id="S3.SS2.p8.4.m4.1"><semantics id="S3.SS2.p8.4.m4.1a"><msub id="S3.SS2.p8.4.m4.1.1" xref="S3.SS2.p8.4.m4.1.1.cmml"><mi id="S3.SS2.p8.4.m4.1.1.2" xref="S3.SS2.p8.4.m4.1.1.2.cmml">V</mi><mrow id="S3.SS2.p8.4.m4.1.1.3" xref="S3.SS2.p8.4.m4.1.1.3.cmml"><mn id="S3.SS2.p8.4.m4.1.1.3.2" xref="S3.SS2.p8.4.m4.1.1.3.2.cmml">2</mn><mo id="S3.SS2.p8.4.m4.1.1.3.1" lspace="0.278em" rspace="0.278em" xref="S3.SS2.p8.4.m4.1.1.3.1.cmml">:</mo><mn id="S3.SS2.p8.4.m4.1.1.3.3" xref="S3.SS2.p8.4.m4.1.1.3.3.cmml">8</mn></mrow></msub><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.4.m4.1b"><apply id="S3.SS2.p8.4.m4.1.1.cmml" xref="S3.SS2.p8.4.m4.1.1"><csymbol cd="ambiguous" id="S3.SS2.p8.4.m4.1.1.1.cmml" xref="S3.SS2.p8.4.m4.1.1">subscript</csymbol><ci id="S3.SS2.p8.4.m4.1.1.2.cmml" xref="S3.SS2.p8.4.m4.1.1.2">𝑉</ci><apply id="S3.SS2.p8.4.m4.1.1.3.cmml" xref="S3.SS2.p8.4.m4.1.1.3"><ci id="S3.SS2.p8.4.m4.1.1.3.1.cmml" xref="S3.SS2.p8.4.m4.1.1.3.1">:</ci><cn id="S3.SS2.p8.4.m4.1.1.3.2.cmml" type="integer" xref="S3.SS2.p8.4.m4.1.1.3.2">2</cn><cn id="S3.SS2.p8.4.m4.1.1.3.3.cmml" type="integer" xref="S3.SS2.p8.4.m4.1.1.3.3">8</cn></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.4.m4.1c">V_{2:8}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.4.m4.1d">italic_V start_POSTSUBSCRIPT 2 : 8 end_POSTSUBSCRIPT</annotation></semantics></math> and the complete information representation <math alttext="V_{1:8}=V_{1}+V_{2:8}" class="ltx_Math" display="inline" id="S3.SS2.p8.5.m5.1"><semantics id="S3.SS2.p8.5.m5.1a"><mrow id="S3.SS2.p8.5.m5.1.1" 
xref="S3.SS2.p8.5.m5.1.1.cmml"><msub id="S3.SS2.p8.5.m5.1.1.2" xref="S3.SS2.p8.5.m5.1.1.2.cmml"><mi id="S3.SS2.p8.5.m5.1.1.2.2" xref="S3.SS2.p8.5.m5.1.1.2.2.cmml">V</mi><mrow id="S3.SS2.p8.5.m5.1.1.2.3" xref="S3.SS2.p8.5.m5.1.1.2.3.cmml"><mn id="S3.SS2.p8.5.m5.1.1.2.3.2" xref="S3.SS2.p8.5.m5.1.1.2.3.2.cmml">1</mn><mo id="S3.SS2.p8.5.m5.1.1.2.3.1" lspace="0.278em" rspace="0.278em" xref="S3.SS2.p8.5.m5.1.1.2.3.1.cmml">:</mo><mn id="S3.SS2.p8.5.m5.1.1.2.3.3" xref="S3.SS2.p8.5.m5.1.1.2.3.3.cmml">8</mn></mrow></msub><mo id="S3.SS2.p8.5.m5.1.1.1" xref="S3.SS2.p8.5.m5.1.1.1.cmml">=</mo><mrow id="S3.SS2.p8.5.m5.1.1.3" xref="S3.SS2.p8.5.m5.1.1.3.cmml"><msub id="S3.SS2.p8.5.m5.1.1.3.2" xref="S3.SS2.p8.5.m5.1.1.3.2.cmml"><mi id="S3.SS2.p8.5.m5.1.1.3.2.2" xref="S3.SS2.p8.5.m5.1.1.3.2.2.cmml">V</mi><mn id="S3.SS2.p8.5.m5.1.1.3.2.3" xref="S3.SS2.p8.5.m5.1.1.3.2.3.cmml">1</mn></msub><mo id="S3.SS2.p8.5.m5.1.1.3.1" xref="S3.SS2.p8.5.m5.1.1.3.1.cmml">+</mo><msub id="S3.SS2.p8.5.m5.1.1.3.3" xref="S3.SS2.p8.5.m5.1.1.3.3.cmml"><mi id="S3.SS2.p8.5.m5.1.1.3.3.2" xref="S3.SS2.p8.5.m5.1.1.3.3.2.cmml">V</mi><mrow id="S3.SS2.p8.5.m5.1.1.3.3.3" xref="S3.SS2.p8.5.m5.1.1.3.3.3.cmml"><mn id="S3.SS2.p8.5.m5.1.1.3.3.3.2" xref="S3.SS2.p8.5.m5.1.1.3.3.3.2.cmml">2</mn><mo id="S3.SS2.p8.5.m5.1.1.3.3.3.1" lspace="0.278em" rspace="0.278em" xref="S3.SS2.p8.5.m5.1.1.3.3.3.1.cmml">:</mo><mn id="S3.SS2.p8.5.m5.1.1.3.3.3.3" xref="S3.SS2.p8.5.m5.1.1.3.3.3.3.cmml">8</mn></mrow></msub></mrow></mrow><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.5.m5.1b"><apply id="S3.SS2.p8.5.m5.1.1.cmml" xref="S3.SS2.p8.5.m5.1.1"><eq id="S3.SS2.p8.5.m5.1.1.1.cmml" xref="S3.SS2.p8.5.m5.1.1.1"></eq><apply id="S3.SS2.p8.5.m5.1.1.2.cmml" xref="S3.SS2.p8.5.m5.1.1.2"><csymbol cd="ambiguous" id="S3.SS2.p8.5.m5.1.1.2.1.cmml" xref="S3.SS2.p8.5.m5.1.1.2">subscript</csymbol><ci id="S3.SS2.p8.5.m5.1.1.2.2.cmml" xref="S3.SS2.p8.5.m5.1.1.2.2">𝑉</ci><apply id="S3.SS2.p8.5.m5.1.1.2.3.cmml" xref="S3.SS2.p8.5.m5.1.1.2.3"><ci 
id="S3.SS2.p8.5.m5.1.1.2.3.1.cmml" xref="S3.SS2.p8.5.m5.1.1.2.3.1">:</ci><cn id="S3.SS2.p8.5.m5.1.1.2.3.2.cmml" type="integer" xref="S3.SS2.p8.5.m5.1.1.2.3.2">1</cn><cn id="S3.SS2.p8.5.m5.1.1.2.3.3.cmml" type="integer" xref="S3.SS2.p8.5.m5.1.1.2.3.3">8</cn></apply></apply><apply id="S3.SS2.p8.5.m5.1.1.3.cmml" xref="S3.SS2.p8.5.m5.1.1.3"><plus id="S3.SS2.p8.5.m5.1.1.3.1.cmml" xref="S3.SS2.p8.5.m5.1.1.3.1"></plus><apply id="S3.SS2.p8.5.m5.1.1.3.2.cmml" xref="S3.SS2.p8.5.m5.1.1.3.2"><csymbol cd="ambiguous" id="S3.SS2.p8.5.m5.1.1.3.2.1.cmml" xref="S3.SS2.p8.5.m5.1.1.3.2">subscript</csymbol><ci id="S3.SS2.p8.5.m5.1.1.3.2.2.cmml" xref="S3.SS2.p8.5.m5.1.1.3.2.2">𝑉</ci><cn id="S3.SS2.p8.5.m5.1.1.3.2.3.cmml" type="integer" xref="S3.SS2.p8.5.m5.1.1.3.2.3">1</cn></apply><apply id="S3.SS2.p8.5.m5.1.1.3.3.cmml" xref="S3.SS2.p8.5.m5.1.1.3.3"><csymbol cd="ambiguous" id="S3.SS2.p8.5.m5.1.1.3.3.1.cmml" xref="S3.SS2.p8.5.m5.1.1.3.3">subscript</csymbol><ci id="S3.SS2.p8.5.m5.1.1.3.3.2.cmml" xref="S3.SS2.p8.5.m5.1.1.3.3.2">𝑉</ci><apply id="S3.SS2.p8.5.m5.1.1.3.3.3.cmml" xref="S3.SS2.p8.5.m5.1.1.3.3.3"><ci id="S3.SS2.p8.5.m5.1.1.3.3.3.1.cmml" xref="S3.SS2.p8.5.m5.1.1.3.3.3.1">:</ci><cn id="S3.SS2.p8.5.m5.1.1.3.3.3.2.cmml" type="integer" xref="S3.SS2.p8.5.m5.1.1.3.3.3.2">2</cn><cn id="S3.SS2.p8.5.m5.1.1.3.3.3.3.cmml" type="integer" xref="S3.SS2.p8.5.m5.1.1.3.3.3.3">8</cn></apply></apply></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.5.m5.1c">V_{1:8}=V_{1}+V_{2:8}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.5.m5.1d">italic_V start_POSTSUBSCRIPT 1 : 8 end_POSTSUBSCRIPT = italic_V start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_V start_POSTSUBSCRIPT 2 : 8 end_POSTSUBSCRIPT</annotation></semantics></math> extracted by SpeechTokenizer, perceptual modeling refers to predicting the complete representation <math alttext="V_{1:8}" class="ltx_Math" display="inline" id="S3.SS2.p8.6.m6.1"><semantics id="S3.SS2.p8.6.m6.1a"><msub 
id="S3.SS2.p8.6.m6.1.1" xref="S3.SS2.p8.6.m6.1.1.cmml"><mi id="S3.SS2.p8.6.m6.1.1.2" xref="S3.SS2.p8.6.m6.1.1.2.cmml">V</mi><mrow id="S3.SS2.p8.6.m6.1.1.3" xref="S3.SS2.p8.6.m6.1.1.3.cmml"><mn id="S3.SS2.p8.6.m6.1.1.3.2" xref="S3.SS2.p8.6.m6.1.1.3.2.cmml">1</mn><mo id="S3.SS2.p8.6.m6.1.1.3.1" lspace="0.278em" rspace="0.278em" xref="S3.SS2.p8.6.m6.1.1.3.1.cmml">:</mo><mn id="S3.SS2.p8.6.m6.1.1.3.3" xref="S3.SS2.p8.6.m6.1.1.3.3.cmml">8</mn></mrow></msub><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.6.m6.1b"><apply id="S3.SS2.p8.6.m6.1.1.cmml" xref="S3.SS2.p8.6.m6.1.1"><csymbol cd="ambiguous" id="S3.SS2.p8.6.m6.1.1.1.cmml" xref="S3.SS2.p8.6.m6.1.1">subscript</csymbol><ci id="S3.SS2.p8.6.m6.1.1.2.cmml" xref="S3.SS2.p8.6.m6.1.1.2">𝑉</ci><apply id="S3.SS2.p8.6.m6.1.1.3.cmml" xref="S3.SS2.p8.6.m6.1.1.3"><ci id="S3.SS2.p8.6.m6.1.1.3.1.cmml" xref="S3.SS2.p8.6.m6.1.1.3.1">:</ci><cn id="S3.SS2.p8.6.m6.1.1.3.2.cmml" type="integer" xref="S3.SS2.p8.6.m6.1.1.3.2">1</cn><cn id="S3.SS2.p8.6.m6.1.1.3.3.cmml" type="integer" xref="S3.SS2.p8.6.m6.1.1.3.3">8</cn></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.6.m6.1c">V_{1:8}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.6.m6.1d">italic_V start_POSTSUBSCRIPT 1 : 8 end_POSTSUBSCRIPT</annotation></semantics></math> given the prompt speech a and the semantic representation <math alttext="V_{1}" class="ltx_Math" display="inline" id="S3.SS2.p8.7.m7.1"><semantics id="S3.SS2.p8.7.m7.1a"><msub id="S3.SS2.p8.7.m7.1.1" xref="S3.SS2.p8.7.m7.1.1.cmml"><mi id="S3.SS2.p8.7.m7.1.1.2" xref="S3.SS2.p8.7.m7.1.1.2.cmml">V</mi><mn id="S3.SS2.p8.7.m7.1.1.3" xref="S3.SS2.p8.7.m7.1.1.3.cmml">1</mn></msub><annotation-xml encoding="MathML-Content" id="S3.SS2.p8.7.m7.1b"><apply id="S3.SS2.p8.7.m7.1.1.cmml" xref="S3.SS2.p8.7.m7.1.1"><csymbol cd="ambiguous" id="S3.SS2.p8.7.m7.1.1.1.cmml" xref="S3.SS2.p8.7.m7.1.1">subscript</csymbol><ci id="S3.SS2.p8.7.m7.1.1.2.cmml" 
xref="S3.SS2.p8.7.m7.1.1.2">𝑉</ci><cn id="S3.SS2.p8.7.m7.1.1.3.cmml" type="integer" xref="S3.SS2.p8.7.m7.1.1.3">1</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p8.7.m7.1c">V_{1}</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p8.7.m7.1d">italic_V start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT</annotation></semantics></math>. SpeechGPT-Gen synthesizes response speech by concatenating the output of SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite> with the prompt speech and using a flow matching model.</p> </div> <div class="ltx_para" id="S3.SS2.p9"> <p class="ltx_p" id="S3.SS2.p9.6"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p9.1.m1.1"><semantics id="S3.SS2.p9.1.m1.1a"><mo id="S3.SS2.p9.1.m1.1.1" xref="S3.SS2.p9.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.1.m1.1b"><ci id="S3.SS2.p9.1.m1.1.1.cmml" xref="S3.SS2.p9.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p9.6.1">Mimi.</em> Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> has eight codebooks at a frame rate of 12.5 Hz, which requires 100 autoregressive steps to generate one second of speech. This results in high computational costs and incompatibility with streaming inference. To address these issues, Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> proposes the RQ-Transformer, comprising a temporal Transformer and a deep Transformer.
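The efficiency gain of this temporal/depth factorization can be illustrated with a step-count sketch: the large temporal Transformer runs once per frame, and a small depth Transformer runs once per codebook within each frame, instead of one large model running over the whole flattened token grid. The counting functions below are placeholders for the two models, not Moshi's implementation.

```python
def generate(S, K):
    """Count forward passes for S frames of K codebooks under the
    RQ-Transformer-style factorization."""
    temporal_calls = depth_calls = 0
    for _ in range(S):          # temporal Transformer: one pass per frame,
        temporal_calls += 1     # producing a context embedding for the frame
        for _ in range(K):      # depth Transformer: one cheap pass per codebook,
            depth_calls += 1    # conditioned on that context
    return temporal_calls, depth_calls

# Two seconds at 12.5 Hz with 8 codebooks: flattened modeling would need
# S*K = 200 large-model steps; the factorization needs only 25 large steps
# plus 200 cheap depth steps.
temporal, depth = generate(S=25, K=8)
```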
The RQ-Transformer breaks down a flattened sequence of length <math alttext="K\cdot S" class="ltx_Math" display="inline" id="S3.SS2.p9.2.m2.1"><semantics id="S3.SS2.p9.2.m2.1a"><mrow id="S3.SS2.p9.2.m2.1.1" xref="S3.SS2.p9.2.m2.1.1.cmml"><mi id="S3.SS2.p9.2.m2.1.1.2" xref="S3.SS2.p9.2.m2.1.1.2.cmml">K</mi><mo id="S3.SS2.p9.2.m2.1.1.1" lspace="0.222em" rspace="0.222em" xref="S3.SS2.p9.2.m2.1.1.1.cmml">⋅</mo><mi id="S3.SS2.p9.2.m2.1.1.3" xref="S3.SS2.p9.2.m2.1.1.3.cmml">S</mi></mrow><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.2.m2.1b"><apply id="S3.SS2.p9.2.m2.1.1.cmml" xref="S3.SS2.p9.2.m2.1.1"><ci id="S3.SS2.p9.2.m2.1.1.1.cmml" xref="S3.SS2.p9.2.m2.1.1.1">⋅</ci><ci id="S3.SS2.p9.2.m2.1.1.2.cmml" xref="S3.SS2.p9.2.m2.1.1.2">𝐾</ci><ci id="S3.SS2.p9.2.m2.1.1.3.cmml" xref="S3.SS2.p9.2.m2.1.1.3">𝑆</ci></apply></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.2.m2.1c">K\cdot S</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.2.m2.1d">italic_K ⋅ italic_S</annotation></semantics></math> into <math alttext="S" class="ltx_Math" display="inline" id="S3.SS2.p9.3.m3.1"><semantics id="S3.SS2.p9.3.m3.1a"><mi id="S3.SS2.p9.3.m3.1.1" xref="S3.SS2.p9.3.m3.1.1.cmml">S</mi><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.3.m3.1b"><ci id="S3.SS2.p9.3.m3.1.1.cmml" xref="S3.SS2.p9.3.m3.1.1">𝑆</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.3.m3.1c">S</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.3.m3.1d">italic_S</annotation></semantics></math> timesteps for a large temporal Transformer which produces a context embedding used to condition a smaller depth Transformer over <math alttext="K" class="ltx_Math" display="inline" id="S3.SS2.p9.4.m4.1"><semantics id="S3.SS2.p9.4.m4.1a"><mi id="S3.SS2.p9.4.m4.1.1" xref="S3.SS2.p9.4.m4.1.1.cmml">K</mi><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.4.m4.1b"><ci id="S3.SS2.p9.4.m4.1.1.cmml" 
xref="S3.SS2.p9.4.m4.1.1">𝐾</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.4.m4.1c">K</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.4.m4.1d">italic_K</annotation></semantics></math> steps. This allows scaling to longer sequences by increasing <math alttext="S" class="ltx_Math" display="inline" id="S3.SS2.p9.5.m5.1"><semantics id="S3.SS2.p9.5.m5.1a"><mi id="S3.SS2.p9.5.m5.1.1" xref="S3.SS2.p9.5.m5.1.1.cmml">S</mi><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.5.m5.1b"><ci id="S3.SS2.p9.5.m5.1.1.cmml" xref="S3.SS2.p9.5.m5.1.1">𝑆</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.5.m5.1c">S</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.5.m5.1d">italic_S</annotation></semantics></math> or to a higher depth by increasing <math alttext="K" class="ltx_Math" display="inline" id="S3.SS2.p9.6.m6.1"><semantics id="S3.SS2.p9.6.m6.1a"><mi id="S3.SS2.p9.6.m6.1.1" xref="S3.SS2.p9.6.m6.1.1.cmml">K</mi><annotation-xml encoding="MathML-Content" id="S3.SS2.p9.6.m6.1b"><ci id="S3.SS2.p9.6.m6.1.1.cmml" xref="S3.SS2.p9.6.m6.1.1">𝐾</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p9.6.m6.1c">K</annotation><annotation encoding="application/x-llamapun" id="S3.SS2.p9.6.m6.1d">italic_K</annotation></semantics></math> than modeling the flattened sequence with a single model.</p> </div> <div class="ltx_para" id="S3.SS2.p10"> <p class="ltx_p" id="S3.SS2.p10.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S3.SS2.p10.1.m1.1"><semantics id="S3.SS2.p10.1.m1.1a"><mo id="S3.SS2.p10.1.m1.1.1" xref="S3.SS2.p10.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S3.SS2.p10.1.m1.1b"><ci id="S3.SS2.p10.1.m1.1.1.cmml" xref="S3.SS2.p10.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S3.SS2.p10.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" 
id="S3.SS2.p10.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S3.SS2.p10.1.1">TiCodec.</em> Ti-Codec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib177" title="">177</a>]</cite> is a decoupled codec model which can separate the time-varying and time-invariant information in speech and quantize them separately. Inspired by VALL-E <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib209" title="">209</a>]</cite>, Freeze-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib213" title="">213</a>]</cite> uses a token-based speech decoder which contains NAR prefill and AR generate stage to achieve speech output capabilities. The speech decoder mainly consists of the NAR decoder, the AR decoder, and the frozen decoder of a codec model <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib177" title="">177</a>]</cite>. Both the NAR decoder and AR decoder are built upon transformer blocks. The NAR decoder is used to model the semantic features from the output of LLM, and then the AR decoder generates speech tokens based on the output of the NAR decoder. 
Finally, the decoder of the codec model converts the speech tokens into a speech stream.</p> </div> <figure class="ltx_table" id="S3.T1"> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S3.T1.2.1.1" style="font-size:90%;">Table 1</span>: </span><span class="ltx_text" id="S3.T1.3.2" style="font-size:90%;">The comparison of semantic and acoustic representations.</span></figcaption> <div class="ltx_inline-block ltx_align_center ltx_transformed_outer" id="S3.T1.4" style="width:421.8pt;height:51.6pt;vertical-align:-1.3pt;"><span class="ltx_transformed_inner" style="transform:translate(-113.6pt,13.5pt) scale(0.65,0.65) ;"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1"> <tr class="ltx_tr" id="S3.T1.4.1.1"> <td class="ltx_td ltx_border_r ltx_border_tt" id="S3.T1.4.1.1.1"></td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_tt" id="S3.T1.4.1.1.2"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1.1.2.1"> <tr class="ltx_tr" id="S3.T1.4.1.1.2.1.1"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.2.1.1.1">Advantages of the</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.1.2.1.2"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.2.1.2.1">comprehension side</td> </tr> </table> </td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_tt" id="S3.T1.4.1.1.3"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1.1.3.1"> <tr class="ltx_tr" id="S3.T1.4.1.1.3.1.1"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.3.1.1.1">Performance in</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.1.3.1.2"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.3.1.2.1">unifying music and audio</td> </tr> </table> </td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_tt" id="S3.T1.4.1.1.4"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1.1.4.1"> <tr class="ltx_tr" id="S3.T1.4.1.1.4.1.1"> <td class="ltx_td ltx_nopad_r 
ltx_align_center" id="S3.T1.4.1.1.4.1.1.1">Compression rate</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.1.4.1.2"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.4.1.2.1">of speech</td> </tr> </table> </td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_tt" id="S3.T1.4.1.1.5"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1.1.5.1"> <tr class="ltx_tr" id="S3.T1.4.1.1.5.1.1"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.5.1.1.1">Emotional and</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.1.5.1.2"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.5.1.2.1">acoustic information</td> </tr> </table> </td> <td class="ltx_td ltx_align_center ltx_border_tt" id="S3.T1.4.1.1.6"> <table class="ltx_tabular ltx_align_middle" id="S3.T1.4.1.1.6.1"> <tr class="ltx_tr" id="S3.T1.4.1.1.6.1.1"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.6.1.1.1">Pipeline for</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.1.6.1.2"> <td class="ltx_td ltx_nopad_r ltx_align_center" id="S3.T1.4.1.1.6.1.2.1">post-processing</td> </tr> </table> </td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.2"> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S3.T1.4.1.2.1">Semantic</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S3.T1.4.1.2.2">Strong</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S3.T1.4.1.2.3">Weak</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S3.T1.4.1.2.4">High</td> <td class="ltx_td ltx_align_center ltx_border_r ltx_border_t" id="S3.T1.4.1.2.5">Less</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S3.T1.4.1.2.6">Cascade</td> </tr> <tr class="ltx_tr" id="S3.T1.4.1.3"> <td class="ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t" id="S3.T1.4.1.3.1">Acoustic</td> <td class="ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t" id="S3.T1.4.1.3.2">Weak</td> <td class="ltx_td ltx_align_center 
ltx_border_bb ltx_border_r ltx_border_t" id="S3.T1.4.1.3.3">Strong</td> <td class="ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t" id="S3.T1.4.1.3.4">Low</td> <td class="ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t" id="S3.T1.4.1.3.5">More</td> <td class="ltx_td ltx_align_center ltx_border_bb ltx_border_t" id="S3.T1.4.1.3.6">End-to-end</td> </tr> </table> </span></div> </figure> </section> <section class="ltx_subsection" id="S3.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.3 </span>Discussions about Representation used in Spoken Dialogue Systems</h3> <section class="ltx_subsubsection" id="S3.SS3.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.3.1 </span>Semantic Representation vs. Acoustic Representation</h4> <div class="ltx_para" id="S3.SS3.SSS1.p1"> <p class="ltx_p" id="S3.SS3.SSS1.p1.1">Current dialogue systems typically choose different approaches for the understanding (input) and generation (output) sides based on task requirements. 
For example, Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> uses semantic representations (HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite>) consistently on both ends, while Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> uses semantic representations (Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>) on the input side and acoustic representations (SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite>) on the output side. Each combination offers unique advantages and trade-offs, and a consensus on a unified speech representation approach has yet to be reached in practical applications.</p> </div> <div class="ltx_para" id="S3.SS3.SSS1.p2"> <p class="ltx_p" id="S3.SS3.SSS1.p2.1">We revisited the differences between semantic and acoustic representations, as shown in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S3.T1" title="Table 1 ‣ 3.2 Speech Representations at the Outputs ‣ 3 Representations of Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">1</span></a>. 
Benefiting from specific task objectives, models such as Wav2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib184" title="">184</a>]</cite>, HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite>, WavLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib27" title="">27</a>]</cite>, and Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite> focus on extracting semantic information embedded within the spoken content. This inherent advantage allows speech to be directly mapped into the embedding space of large language models (LLMs), facilitating alignment with other modalities and fully leveraging the LLM’s strengths. In contrast, acoustic representations extracted by models like EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> and DAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib113" title="">113</a>]</cite> are less conducive to LLM understanding, which is why SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite> and Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> opt for semantic distillation. In addition, semantic representations offer higher compression rates. By configuring various downsampling parameters in convolutional layers, models like HuBERT and Whisper easily achieve frame rates of 25Hz to 50Hz. 
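To make these frame-rate figures concrete, the following back-of-envelope Python sketch (our own illustration; the helper function is invented, not an API from any cited codec) contrasts the per-second token budget of a 25Hz semantic tokenizer with a 12.5Hz acoustic codec whose 8 RVQ codebooks are flattened for autoregressive modeling:

```python
# Back-of-envelope token budgets per second of speech, using the frame
# rates quoted in this survey. Illustrative only; "tokens_per_second"
# is an invented helper, not from any cited work.

def tokens_per_second(frame_rate_hz: float, num_codebooks: int = 1) -> int:
    """Tokens an autoregressive LM must emit to cover one second of
    speech when all codebook streams are flattened into one sequence."""
    return int(frame_rate_hz * num_codebooks)

semantic = tokens_per_second(25)                      # HuBERT-style units
acoustic = tokens_per_second(12.5, num_codebooks=8)   # Mimi-style RVQ

print(semantic, acoustic)  # 25 100
```

Even though Mimi's 12.5Hz frame rate is lower than HuBERT's 25Hz, flattening its eight codebooks yields four times as many tokens to predict per second of speech.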
Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite>, for instance, uses 25Hz HuBERT units, meaning that only 25 tokens are needed to represent one second of speech. In contrast, acoustic features are designed with compression and reconstruction in mind, where the constraints of signal transmission make extreme compression and high-quality reconstruction challenging to achieve simultaneously. Although Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> has achieved a frame rate of 12.5Hz, its use of 8 codebooks means that autoregressively predicting one second of speech requires 100 steps. Finally, in certain scenarios, semantic representations hold distinct advantages.</p> </div> <div class="ltx_para" id="S3.SS3.SSS1.p3"> <p class="ltx_p" id="S3.SS3.SSS1.p3.1">However, we must acknowledge that purely semantic representations fall short in naturalness and expressiveness, especially in tasks involving emotional expression or complex speech dynamics, where acoustic representations provide more nuanced information. For instance, HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> cannot extract prosodic and stylistic features as effectively as EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> or Emotion2Vec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib143" title="">143</a>]</cite>. Notably, using acoustic representations allows for flexible handling of various data types—speech, audio, music, and sound—making dialogue systems more unified and versatile. 
Moreover, when acoustic representations are used as the output of a language model, they can seamlessly connect to the codec decoder for speech synthesis. In contrast, dialogue systems using semantic features often require separately trained vocoders <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib106" title="">106</a>]</cite> or rely on additional text-to-speech toolkits <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>. This gap is crucial for dialogue systems, as the resulting latency directly impacts the user experience.</p> </div> <div class="ltx_para" id="S3.SS3.SSS1.p4"> <p class="ltx_p" id="S3.SS3.SSS1.p4.1">Given the unique advantages of semantic and acoustic features across different tasks, future research may shift toward integrating these features. A valuable perspective is that models like SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite> and Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> have already attempted to distill semantic representations from HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> or WavLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib27" title="">27</a>]</cite> into RVQ-1, ensuring a balanced representation of both semantic and acoustic information in the system. With technological advancements, we look forward to more unified and refined modeling approaches. 
A promising direction would be to design new training objectives for speech tokenizers, exploring both data-driven and objective-driven methods, thus avoiding the need for additional pre-trained models. As spoken dialogue systems are still evolving, exploring more robust hybrid representations is indeed valuable.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS3.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.3.2 </span>Continuous Representation vs. Discrete Representation</h4> <div class="ltx_para" id="S3.SS3.SSS2.p1"> <p class="ltx_p" id="S3.SS3.SSS2.p1.1">There is still no consensus on whether to use continuous or discrete representations in spoken dialogue systems. Considerations on the input side mainly depend on the type of representation model chosen by the system. Some systems <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> use models like HuBERT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> or Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite> to extract continuous speech representations, which requires adding a speech adapter and an additional training phase focused on modality alignment. 
Other systems <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> use models like EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> or Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> to extract discrete speech representations, adding speech tokens directly to the LLM’s vocabulary, thereby shifting the training burden onto the LLM itself. Despite the different approaches, the key is to enable large language models to effectively understand speech features. For autoregressive models, using discrete inputs may appear more manageable; however, whether this truly outperforms continuous inputs in terms of performance remains to be explored.</p> </div> <div class="ltx_para" id="S3.SS3.SSS2.p2"> <p class="ltx_p" id="S3.SS3.SSS2.p2.1">Language models trained with next-token prediction objectives tend to favor discrete modalities. Using discrete features on the output side naturally supports simple codec decoders <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib213" title="">213</a>]</cite> for reconstructing high-fidelity speech, enhancing speech quality and acoustic control while enabling an end-to-end system. 
In contrast, continuous features may require additional text-to-speech toolkits <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> or vocoders <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>, resulting in a cascaded pipeline and making it difficult to preserve detailed acoustic information. Another notable advantage of using discrete representations as output is the ability to quickly feed them into the input of the next dialogue round, as demonstrated in OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite>. In the field of computer vision, a range of work <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib256" title="">256</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib221" title="">221</a>]</cite> has emerged that combines discrete and continuous representations, aiming to fully integrate these modes without information loss, and has already achieved success in certain areas. These approaches may provide valuable insights for the next generation of spoken dialogue systems.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS3.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.3.3 </span>Single-Layer Quantizer vs. Multi-Layer Quantizer</h4> <div class="ltx_para" id="S3.SS3.SSS3.p1"> <p class="ltx_p" id="S3.SS3.SSS3.p1.1">As previously mentioned regarding compression rates, the number of quantizers must be carefully considered when using the speech codec. 
Currently, dialogue systems commonly use multi-layer quantizers, such as those in EnCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite>, SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite>, SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite> and Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>. This inevitably introduces generation latency, as residual vector quantization requires each quantizer’s input to depend on the output of the previous quantizer. Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> and Mini-Omni 2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> adopt an approach similar to MusicGen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib40" title="">40</a>]</cite>, introducing delayed steps to enable parallel generation across multiple quantizers. Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> proposes splitting the RVQ, allowing the eight VQs to generate independently in parallel. These strategies help mitigate latency issues to some extent but still fall short of the efficiency achieved with semantic representations.</p> </div> <div class="ltx_para" id="S3.SS3.SSS3.p2"> <p class="ltx_p" id="S3.SS3.SSS3.p2.1">Recently, research on single-layer quantizers has shown promising breakthroughs. 
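Before turning to those single-quantizer models, the delayed-step idea from the preceding paragraph can be made concrete. The following is our own toy sketch (not code from MusicGen, Mini-Omni, or Moshi): codebook k is shifted right by k steps, so each autoregressive step emits one token from every codebook in parallel while each residual level still conditions on previously generated coarser levels.

```python
PAD = -1  # placeholder for positions where a codebook has no token yet

def apply_delay_pattern(codes):
    """codes: K lists of length T (one per RVQ codebook).
    Returns a K x (T + K - 1) grid with codebook k delayed by k steps;
    each column is what the model emits in one autoregressive step."""
    K, T = len(codes), len(codes[0])
    width = T + K - 1
    return [
        [codes[k][t - k] if 0 <= t - k < T else PAD for t in range(width)]
        for k in range(K)
    ]

# 3 codebooks, 4 frames: 3 * 4 = 12 flattened steps collapse to
# 4 + 3 - 1 = 6 delayed steps.
grid = apply_delay_pattern([[10, 11, 12, 13],
                            [20, 21, 22, 23],
                            [30, 31, 32, 33]])
for row in grid:
    print(row)
```

Generating T frames from K codebooks thus takes T + K - 1 steps instead of K·T, at the cost of each residual level seeing slightly stale coarser levels.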
Models like WavTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib90" title="">90</a>]</cite>, Single-Codec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib119" title="">119</a>]</cite>, and BigCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib224" title="">224</a>]</cite> advocate using a single VQ to discretize speech, achieving competitive results in both reconstruction and generation tasks. Notably, WavTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib90" title="">90</a>]</cite> has already achieved an impressive compression rate of 40Hz, i.e., only 40 tokens per second of speech. Integrating a single-layer quantizer with dialogue systems is promising, as it allows for rapid extraction of speech features on the input side and significantly reduces the burden of autoregressive modeling.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS3.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.3.4 </span>With Text Guidance vs. Without Text Guidance</h4> <div class="ltx_para" id="S3.SS3.SSS4.p1"> <p class="ltx_p" id="S3.SS3.SSS4.p1.1">In practice, researchers have found direct speech-to-speech generation challenging <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> due to complex mapping relationships, so intermediate texts are often generated to achieve higher generation quality. 
Current end-to-end dialogue systems commonly adopt one of two strategies: one <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> generates the hidden states corresponding to the text response first, which are then post-processed to obtain speech tokens; the other <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> generates text and speech tokens in parallel. These approaches leverage the text modeling capabilities of large language models, essentially guiding the synthesis of semantically consistent speech by first generating text. However, this comes at the expense of response speed.</p> </div> <div class="ltx_para" id="S3.SS3.SSS4.p2"> <p class="ltx_p" id="S3.SS3.SSS4.p2.1">Although directly performing speech-to-speech generation presents challenges such as increased model complexity and inference difficulty, we believe it remains a promising direction for future research. One approach is to retrain large spoken language models to adapt to specific speech representations. However, this faces challenges related to data resources, as large-scale and high-quality conversational datasets remain scarce. Additionally, this method cannot completely eliminate text prompts and requires multi-stage training, starting with text-speech pairs to allow the model to progressively acquire conversational capabilities. Another approach could begin with speech codecs, as demonstrated by SpeechTokenizer and Mimi’s extensive work in semantic distillation. 
We envision a novel speech codec that aligns text and speech during the encoding phase, thereby reducing the generation burden on large language models. By aligning speech representations with the text representation space earlier in the process, the autoregressive modeling would no longer require text guidance, giving rise to an entirely new paradigm for conversational systems.</p> </div> </section> </section> </section> <section class="ltx_section" id="S4"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">4 </span>Training Paradigm of Spoken Dialogue Model</h2> <div class="ltx_para" id="S4.p1"> <p class="ltx_p" id="S4.p1.1">Existing text-based large language models have demonstrated strong contextual understanding and reasoning abilities in the field of natural language processing, such as GPT-4 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib1" title="">1</a>]</cite>, Llama 3.1 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib52" title="">52</a>]</cite>, and Qwen-2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib228" title="">228</a>]</cite>. Due to their training on large-scale corpora, these models achieve exceptional accuracy when handling complex contexts. To further expand the capabilities of large language models, some research <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite> has explored enabling them to understand other modalities, thereby building multimodal interaction abilities. 
The spoken dialogue model, also known as the speech-text dialogue model, allows users to interact with LLMs naturally and straightforwardly through speech. However, the transition from text intelligence to speech intelligence involves two inherent hurdles: one core issue is the insufficient amount of speech data compared to the massive datasets used for pre-training text-based large language models. For instance, Llama 3.1 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib52" title="">52</a>]</cite> uses 800 billion training tokens, and Qwen-2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib228" title="">228</a>]</cite> is trained on over 7 trillion tokens, whereas pure speech pre-training data often amounts to hundreds of thousands or millions of hours. For example, Moshi’s <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> pre-training speech data comprises 7 million hours, and the amount of labeled speech data is even smaller, making it difficult to support LLMs in achieving powerful speech intelligence comparable to text. Another challenge is that speech is less information-dense than text. Text commonly uses byte-pair encoding (BPE) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib62" title="">62</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib186" title="">186</a>]</cite> to compress it into a tight token space, whereas the speech modality includes not only semantic information but also acoustic information, which is less dense. This undoubtedly increases the difficulty for LLMs to learn. 
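The scale gap can be made concrete with a rough conversion from hours of speech to discrete-token counts (our own illustrative arithmetic; the helper function is invented, and the 25Hz rate is the HuBERT-style frame rate discussed in Section 3):

```python
# Rough conversion from hours of speech to discrete-token counts, for
# comparison with text pre-training budgets such as Qwen-2's reported
# 7 trillion tokens. Illustrative arithmetic only.

def speech_tokens(hours: float, frame_rate_hz: float, num_codebooks: int = 1) -> int:
    return int(hours * 3600 * frame_rate_hz * num_codebooks)

# 7 million hours (Moshi's pre-training scale) as 25 Hz semantic units:
budget = speech_tokens(7_000_000, 25)
print(budget)  # 630000000000, i.e. roughly 0.63T tokens
```

Even millions of hours of speech, tokenized at a semantic frame rate, yield on the order of 0.63 trillion tokens, about an order of magnitude below Qwen-2's 7 trillion text tokens.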
Effectively understanding and generating the knowledge inherent in the speech modality thus remains a significant challenge.</p> </div> <div class="ltx_para" id="S4.p2"> <p class="ltx_p" id="S4.p2.1">Consequently, existing spoken dialogue models aim to build upon text-based LLMs by incorporating the speech modality into these large language models. Several works <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> support speech-in and speech-out capabilities for LLMs, forming the foundation of basic speech dialogue capabilities. Some of the latest advanced approaches <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> attempt to transition from traditional turn-based spoken dialogue systems to full-duplex systems, aiming to simulate the natural spontaneity of human conversation. While these advancements are promising, achieving low latency and natural interaction in full-duplex systems remains a significant challenge. Moreover, enhancing LLMs to effectively handle the speech modality—mastering both speech comprehension and generation—while maintaining robust natural language text processing capabilities is hindered by the limited size of labeled speech datasets. These datasets are far smaller than the vast amounts of pure text data available, which risks diminishing the models’ original text processing capabilities. 
Thus, building a truly end-to-end conversational model that meets real-world requirements necessitates careful consideration of model architecture, training paradigms, and training data. Overall, we believe that several key aspects are crucial in the training paradigm of spoken dialogue models: aligning speech-text modalities to ensure consistent understanding, designing multi-stage training strategies for gradual adaptation, and optimizing training structures and inference paradigms for efficient performance.</p> </div> <section class="ltx_subsection" id="S4.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.1 </span>Architecture Paradigm about Modal Alignment of Speech and Text</h3> <div class="ltx_para" id="S4.SS1.p1"> <p class="ltx_p" id="S4.SS1.p1.1">To enable large language models (LLMs) to handle both speech input and output, a significant amount of prior work <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib179" title="">179</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib52" title="">52</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> has focused on adapting text-based foundation models into robust spoken dialogue models. 
Based on different architectural paradigms, these approaches can be broadly categorized into five types, as shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.F5" title="Figure 5 ‣ 4.1 Architecture Paradigm about Modal Alignment of Speech and Text ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">5</span></a>.</p> </div> <figure class="ltx_figure" id="S4.F5"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="462" id="S4.F5.g1" src="x11.png" width="788"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F5.2.1.1" style="font-size:90%;">Figure 5</span>: </span><span class="ltx_text" id="S4.F5.3.2" style="font-size:90%;">Categorization Diagram of Spoken Dialogue Model Architectural Paradigms.</span></figcaption> </figure> <div class="ltx_para" id="S4.SS1.p2"> <p class="ltx_p" id="S4.SS1.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p2.1.1">Text-Output Only Method.</span> These systems <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib67" title="">67</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib80" title="">80</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib41" title="">41</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> maintain the text-based LLM’s foundational structure unchanged, <span class="ltx_text ltx_font_bold" 
id="S4.SS1.p2.1.2">using an audio encoder and adaptor to map speech input into the LLM’s pre-trained text latent space directly.</span> This method of direct embedding alignment, combined with a multi-task training strategy, equips the LLM with the ability to ’listen,’ thus enabling it to understand and process speech modality inputs effectively and perform exceptionally well in various audio understanding tasks. Nevertheless, the output remains text-based, which necessitates the use of an external text-to-speech (TTS) system <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib21" title="">21</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite> to generate speech output. LTU-AS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib67" title="">67</a>]</cite> uses Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite> and the Time and Layer-Wise Transformer (TLTR) as its audio encoder, allowing it to recognize both speech and audio events. Qwen-Audio 1 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>]</cite> scales up audio-language pre-training to cover over 30 tasks and various audio types, facilitating universal audio understanding abilities. It employs a unified encoder for all audio inputs, bridging the gap between audio and textual modalities, and uses the large language model Qwen-7B <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib11" title="">11</a>]</cite> as its foundational component. 
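The encoder-plus-adaptor pattern shared by these systems can be sketched at the shape level as follows; the dimensions, the 4-frame stacking, and the tile-and-truncate "projection" are placeholders standing in for a learned linear layer.

```python
# Shape-level sketch of the audio-encoder -> adaptor -> LLM input path.
# Dimensions are illustrative; the "projection" is a dependency-free
# placeholder for a trained linear mapping into the LLM embedding space.
def stack_frames(frames, k=4):
    """Downsample by concatenating every k consecutive encoder frames."""
    usable = len(frames) - len(frames) % k
    return [[v for f in frames[i:i + k] for v in f] for i in range(0, usable, k)]

def project(frame, out_dim=4096):
    """Stand-in for W @ frame + b: tile and truncate to the LLM width."""
    reps = -(-out_dim // len(frame))   # ceiling division
    return (frame * reps)[:out_dim]

encoder_out = [[0.0] * 512 for _ in range(100)]   # 1 s of 100 Hz, dim-512 features
llm_inputs = [project(f) for f in stack_frames(encoder_out)]
print(len(llm_inputs), len(llm_inputs[0]))        # 25 frames of dimension 4096
```

The point of the sketch is the bookkeeping: the adaptor both reduces the frame rate (here 100 Hz to 25 Hz) and matches the encoder's feature width to the LLM's embedding width, so speech embeddings can be consumed like text embeddings.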
Qwen-Audio 2 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>]</cite> simplifies the pre-training process by utilizing natural language prompts for different data and tasks, with DPO <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib170" title="">170</a>]</cite> optimizing the model’s performance in terms of factuality and adherence to desired behavior. SALMONN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>]</cite> employs dual auditory encoders: a speech encoder from the Whisper model and a non-speech BEATs <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib28" title="">28</a>]</cite> audio encoder. The auditory features from these two encoders are complementary, making them suitable for general audio inputs that contain both speech and non-speech information. These inputs are then connected to a well-trained LLM using Q-Former-style attention to generate responses. VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> implements a duplex solution through two independent modules: one generates text responses to user queries, while the other continuously monitors environmental input to selectively provide updated interaction content, although it still requires an external TTS system. However, all of the aforementioned methods largely overlook paralinguistic information, including emotion, prosody, and non-verbal elements, rendering them insufficient for scenarios that involve emotional speech dialogue. 
ParalinGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>]</cite> utilizes an ASR model to obtain text and a speech encoder to extract emotion embeddings, thereby more accurately simulating both the linguistic content and paralinguistic attributes of spoken responses. E-chat <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite> employs a Hubert speech encoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> to extract speech and emotion features, using a connection module to map these features to the textual space within the LLM decoder. Although these approaches have explored emotional responses within spoken dialogue systems, they require additional systems to synthesize speech from text and suffer from high latency, making real-time dialogue challenging to achieve.</p> </div> <div class="ltx_para" id="S4.SS1.p3"> <p class="ltx_p" id="S4.SS1.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p3.1.1">Chain-of-Modality (CoM) Method.</span> This method tokenizes speech into discrete tokens and extends the LLM’s vocabulary to handle both speech input and output. To address alignment issues between speech and text modalities, recent works <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib156" title="">156</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite> utilize a prompting approach called Chain-of-Modality (CoM), which first generates response text autoregressively before producing the corresponding speech. 
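The Chain-of-Modality procedure can be sketched as below; every component is a toy stand-in (the function names are illustrative placeholders, not any model's API), but the serial structure shows why the full text response must exist before speech generation can start.

```python
# Minimal sketch of Chain-of-Modality decoding with toy component stubs
# (no real ASR, LLM, or vocoder; all names are illustrative).
def transcribe(user_speech):
    return "what is the weather"                    # ASR stage stub

def generate_text(transcript):
    return "Answering: " + transcript               # text-LLM stage stub

def generate_speech(text_reply):
    units = [hash(tok) % 1024 for tok in text_reply.split()]
    return "neutral", units                         # (style label, speech units)

def chain_of_modality_reply(user_speech):
    # Serial dependency: speech generation cannot begin until the entire
    # text response exists -- the source of CoM's added latency.
    transcript = transcribe(user_speech)
    text_reply = generate_text(transcript)
    style, units = generate_speech(text_reply)
    return text_reply, style, units

reply, style, units = chain_of_modality_reply([0.0, 0.1])
print(reply, style, len(units))
```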
This technique allows the text LLM’s output to guide speech generation, thereby enhancing the quality of the response content. However, it is not suitable for live interactions, as the model must complete the entire text response before beginning speech generation, leading to increased response latency. SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite> and SpeechGPT-gen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>]</cite> employ the SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite> model as a speech token extractor, breaking down speech generation into the prediction of semantic tokens followed by acoustic tokens. Spectron <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib156" title="">156</a>]</cite> performs speech continuation by predicting spectrograms frame-by-frame, optimizing the LLM with a combination of cross-entropy loss for text and reconstruction loss for speech frames. EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite>, on the other hand, utilizes the FSPIRAL <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib85" title="">85</a>]</cite> architecture for its speech encoder to capture phonetic and tonal information, which is then discretized using finite scalar quantization (FSQ) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib149" title="">149</a>]</cite>. 
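A minimal sketch of FSQ is given below: each latent dimension is rounded independently to a small set of levels in [-1, 1], and the per-dimension indices combine into one token id. Real FSQ operates inside a trained autoencoder; the level count here is an arbitrary choice.

```python
# Minimal finite scalar quantization (FSQ) sketch. Each latent dimension is
# snapped to one of `levels` values in [-1, 1]; the per-dimension indices
# form a mixed-radix code that serves as the discrete token id.
def fsq_quantize(z, levels=5):
    half = (levels - 1) / 2
    idxs = [min(levels - 1, max(0, round((v + 1) * half))) for v in z]
    quantized = [i / half - 1 for i in idxs]
    token_id = 0
    for i in idxs:                      # combine indices into one token id
        token_id = token_id * levels + i
    return quantized, token_id

q, tok = fsq_quantize([0.3, -0.8, 0.0])
print(q, tok)   # [0.5, -1.0, 0.0] 77
```

Unlike a learned VQ codebook, the "codebook" here is implicit in the fixed level grid, which is precisely what makes FSQ simple and collapse-free.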
Its speech response procedure is divided into three primary steps: 1) transcribing user instructions into text, 2) generating textual responses based on these instructions, and 3) producing style labels and response speech units from the textual responses. This process enables EMOVA to facilitate emotional speech dialogue.</p> </div> <div class="ltx_para" id="S4.SS1.p4"> <p class="ltx_p" id="S4.SS1.p4.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p4.1.1">Interleaving Text and Speech Tokens.</span> Some earlier models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib179" title="">179</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib145" title="">145</a>]</cite> employed supervised training methods, using specific input and output sequences, and trained on mixed speech-text tasks, including text-to-speech (TTS), automatic speech recognition (ASR), and speech-to-speech translation. Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> leverages the temporal alignment between speech and its transcription, continuing training on a pre-trained text-based LLM using alternating text and speech tokens. This significantly improves the model’s performance in both speech understanding and generation. However, it employs discrete Hubert units <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> as speech representations, which results in some loss of paralinguistic information. 
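Word-level interleaving of the kind used by Spirit-LM can be sketched as follows; the alignment, the alternation schedule, and the unit ids are all fabricated for illustration.

```python
# Toy word-level interleaving of text tokens and speech units into a single
# training stream. Real systems derive the word alignment from forced
# alignment of the transcript; here it is given directly.
def interleave(words, word_units, speech_every=2):
    stream = []
    for i, word in enumerate(words):
        if i % speech_every:                       # alternate modality per word
            stream += [f"[Hu{u}]" for u in word_units[i]]
        else:
            stream.append(f"[TEXT]{word}")
    return stream

words = ["hello", "there", "friend"]
units = [[12, 12, 7], [3, 9], [44, 2]]             # fabricated per-word units
print(interleave(words, units))
# ['[TEXT]hello', '[Hu3]', '[Hu9]', '[TEXT]friend']
```

Because both modalities share one token stream, a single next-token objective teaches the model to cross the speech-text boundary in either direction.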
USDM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib106" title="">106</a>]</cite> continues pretraining Mistral-7B <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib22" title="">22</a>]</cite> with interleaved speech-text data to capture multimodal semantics. For dialogue fine-tuning, it constructs templates using both speech and transcripts of user input as instruction data.</p> </div> <div class="ltx_para" id="S4.SS1.p5"> <p class="ltx_p" id="S4.SS1.p5.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p5.1.1">Parallel Generation of Text and Speech.</span> PSLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib154" title="">154</a>]</cite> proposes generating speech and text tokens in parallel to reduce latency; however, this approach may compromise response quality. Additionally, this method still relies on speech recognition for input <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>, which introduces further delay. Llama-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> introduces a novel streaming speech decoder that can simultaneously generate text responses and discrete speech unit sequences, significantly reducing latency and meeting real-time interaction needs. 
Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> and Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> adopt similar approaches, introducing dual streams that generate both speech tokens and corresponding text tokens simultaneously on the assistant side, facilitating the transfer of the pre-trained LLM’s textual capabilities to the speech modality, enabling the model to directly engage in reasoning through speech. The key difference lies in how speech-text alignment is handled: Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> uses explicit alignment information to supervise the model’s learning, while Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> allows the LLM to learn implicit alignment information. On the input side, Mini-Omni feeds continuous speech embeddings from the Whisper encoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite> into the LLM, enhancing the model’s ability to understand spoken instructions without requiring text input. However, inconsistencies between speech input and output introduce additional computational overhead, increasing latency in multi-turn dialogue scenarios. In contrast, Moshi allows users to input speech without relying on text, and generates both text and speech tokens in parallel on the assistant side. 
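A toy version of such dual-stream decoding is sketched below; the 4:1 ratio between audio frames and text tokens and the padding scheme are arbitrary illustrative choices, not either model's actual schedule.

```python
# Toy dual-stream decoding schedule: every audio frame emits one speech
# token, while the slower text stream advances only every 4th frame and is
# padded otherwise, keeping the two streams frame-aligned.
PAD = "<pad>"

def decode_step(text_stream, speech_stream, frame):
    if frame % 4 == 0 and frame // 4 < len(text_stream):
        text = text_stream[frame // 4]
    else:
        text = PAD
    return text, speech_stream[frame]

text_stream = ["Hi", "there", "!"]
speech_stream = [f"s{i}" for i in range(12)]
steps = [decode_step(text_stream, speech_stream, t) for t in range(12)]
print(steps[0], steps[1], steps[4])   # ('Hi', 's0') ('<pad>', 's1') ('there', 's4')
```

The padded text stream is what lets the pre-trained LLM's textual reasoning run "alongside" audio generation instead of before it, which is the latency advantage over Chain-of-Modality decoding.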
Moshi further extends its architecture to model several speech streams in parallel, allowing for conceptually and practically simple handling of full-duplex dialogues with arbitrary dynamics.</p> </div> <div class="ltx_para" id="S4.SS1.p6"> <p class="ltx_p" id="S4.SS1.p6.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p6.1.1">Speech-to-Speech Generation.</span> This approach aims to remove the dependency on intermediate text, thereby reducing latency and making the system closer to real-time interaction. SyncLLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> achieves real-time full-duplex interaction through time chunking methods, integrating time information into LLMs to enable synchronous operation with the real-world clock. IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> utilizes a specific model to generate multiple speech tokens in a single step, effectively reducing speech token sequences to lengths comparable to text sequences while producing high-quality audio. Align-SLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib129" title="">129</a>]</cite> utilizes a pre-trained self-supervised Hubert model <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> with K-means clustering <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib74" title="">74</a>]</cite> to convert continuous speech representations into discrete units. 
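The unit-extraction recipe used by Align-SLM-style systems can be sketched in miniature as follows; the centroids and features are made up, whereas real systems cluster encoder features from a large corpus and typically use hundreds of clusters.

```python
# Toy "HuBERT + K-means" discretization: continuous frame features become
# unit ids via nearest-centroid assignment; consecutive repeats are then
# collapsed, as is common before unit language modeling.
def assign_units(features, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda k: dist2(f, centroids[k]))
            for f in features]

def dedup(units):
    return [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]

centroids = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # fabricated cluster centers
feats = [[0.1, 0.1], [0.2, 0.0], [0.9, 1.1], [1.0, 0.9], [0.1, 0.8]]
print(dedup(assign_units(feats, centroids)))        # [0, 1, 2]
```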
It employs LoRA adapter <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib79" title="">79</a>]</cite> fine-tuning on a pre-trained Twist model <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib74" title="">74</a>]</cite> to produce multiple speech continuations from a given prompt and uses semantic metrics to generate preference data for Direct Preference Optimization (DPO) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib170" title="">170</a>]</cite>. Experimental results indicate that integrating the preference optimization method significantly improves the semantic comprehension of the Spoken LLM.</p> </div> </section> <section class="ltx_subsection" id="S4.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.2 </span>Multi-stage Training strategy</h3> <div class="ltx_para" id="S4.SS2.p1"> <p class="ltx_p" id="S4.SS2.p1.1">This section discusses the training process of spoken dialogue models, building upon previous work on spoken dialogue systems. Generally, this process consists of four stages: text LLM pre-training, modality adaptation and alignment post-training, supervised fine-tuning, and, optionally, preference optimization. The primary goal in training most spoken dialogue systems is to preserve the model’s original capabilities while integrating the speech modality for voice interaction into the LLM. 
The multi-stage training pipeline is illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.F6" title="Figure 6 ‣ 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">6</span></a>.</p> </div> <figure class="ltx_figure" id="S4.F6"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="472" id="S4.F6.g1" src="x12.png" width="789"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F6.2.1.1" style="font-size:90%;">Figure 6</span>: </span><span class="ltx_text" id="S4.F6.3.2" style="font-size:90%;">Diagram of Multi-stage Training Steps.</span></figcaption> </figure> <section class="ltx_subsubsection" id="S4.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.1 </span>Text LLM Pre-Training</h4> <div class="ltx_para" id="S4.SS2.SSS1.p1"> <p class="ltx_p" id="S4.SS2.SSS1.p1.1">The goal is to develop a text-intelligent LLM capable of handling complex contexts and possessing knowledge reasoning abilities, thus preparing it for integration with speech-intelligent LLMs. Most spoken dialogue systems utilize pre-trained large language models as foundational models rather than pre-training on separate text data themselves. 
A series of approaches <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> use the LLaMA model and its variants as their foundational language model. On the other hand, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib50" title="">50</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> employ the Qwen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib11" title="">11</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib228" title="">228</a>]</cite> family of large language models as their backbone. 
Meanwhile, Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> employs an RQ-Transformer for hierarchical autoregressive modeling of speech, utilizing a unique structure that involves pre-training a text-only language model with datasets from the internet (e.g., Wikipedia <span class="ltx_note ltx_role_footnote" id="footnote10"><sup class="ltx_note_mark">10</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">10</sup><span class="ltx_tag ltx_tag_note">10</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://dumps.wikimedia.org/" title="">https://dumps.wikimedia.org/</a></span></span></span> and StackExchange <span class="ltx_note ltx_role_footnote" id="footnote11"><sup class="ltx_note_mark">11</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">11</sup><span class="ltx_tag ltx_tag_note">11</span><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://archive.org/details/stackexchange/" title="">https://archive.org/details/stackexchange/</a></span></span></span>). The collected data was filtered using a comprehensive preprocessing pipeline to ensure quality and relevance, which included deduplication to remove redundant entries, language identification to retain text in the desired language, and quality filtering to exclude low-quality or irrelevant content based on criteria such as coherence and completeness. 
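A minimal filtering pipeline in the spirit of these steps is sketched below; the ASCII test and the end-of-sentence heuristic are crude stand-ins for real language-identification and quality classifiers.

```python
# Minimal text-corpus filtering pipeline: deduplication, language
# identification, and quality filtering. The checks are toy heuristics
# standing in for the classifiers a real preprocessing pipeline would use.
def filter_corpus(docs, min_len=20):
    seen, kept = set(), []
    for doc in docs:
        key = doc.strip().lower()
        if key in seen:                # deduplication of redundant entries
            continue
        seen.add(key)
        if not doc.isascii():          # stand-in for language identification
            continue
        if len(doc) < min_len or not doc.rstrip().endswith((".", "?", "!")):
            continue                   # stand-in for quality filtering
        kept.append(doc)
    return kept

docs = [
    "A complete, coherent sentence about speech models.",
    "a complete, coherent sentence about speech models.",   # duplicate
    "short frag",                                           # low quality
    "Une phrase en fran\u00e7ais, filtr\u00e9e ici.",       # wrong language
]
print(filter_corpus(docs))   # keeps only the first document
```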
VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> utilizes Mixtral 8x7B <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib95" title="">95</a>]</cite>, a representative LLM with a sparse mixture of experts (SMoE) architecture, and performs pure-text instruction tuning for its extended Chinese vocabulary.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.2 </span>Modality Adaptation and Alignment Post-training</h4> <figure class="ltx_figure" id="S4.F7"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="328" id="S4.F7.g1" src="x13.png" width="789"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S4.F7.2.1.1" style="font-size:90%;">Figure 7</span>: </span><span class="ltx_text" id="S4.F7.3.2" style="font-size:90%;">Alignment Post-training Methods.</span></figcaption> </figure> <div class="ltx_para" id="S4.SS2.SSS2.p1"> <p class="ltx_p" id="S4.SS2.SSS2.p1.1">This phase explores strategies to adapt text-based large language models (LLMs) to speech modality input, focusing on aligning the text and audio modalities effectively. The primary goal is to enhance the models’ ability to understand and generate speech by bridging the gap between these two modalities. Common approaches include multimodal training techniques, leveraging unlabeled speech corpora, and employing multi-task learning frameworks. These methods typically involve fine-tuning existing LLMs on speech-related tasks and integrating speech-specific modules, such as speech adaptors and decoders, to facilitate seamless interaction between the text and speech modalities. 
Different training tasks for modality adaptation and alignment are shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S4.F7" title="Figure 7 ‣ 4.2.2 Modality Adaptation and Alignment Post-training ‣ 4.2 Multi-stage Training strategy ‣ 4 Training Paradigm of Spoken Dialogue Model ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">7</span></a>. Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> continuously pretrains on text LLM checkpoints using interleaved text and speech tokens to improve the model’s performance in speech understanding and generation. LLaMA-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> adopts a two-stage training strategy: the first stage jointly trains a speech adaptor and LLM with speech input and text responses, while the second stage uses the same dataset to train a streaming speech decoder independently. Consequently, this LLM primarily possesses the capability for speech input understanding, with speech generation handled by a separate decoder module. SpeechGPT <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>]</cite>, Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, and VITA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>]</cite> utilize unlabeled speech corpora to train models in a next-token prediction task. In the first phase, VITA focuses on training the audio encoder and connector, while in the second phase, it optimizes both the connector and the LLM model through multimodal training. 
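Such phase-wise schedules amount to toggling which module groups receive gradient updates; the sketch below uses placeholder module and phase names (not taken from any released codebase) to illustrate the bookkeeping.

```python
# Toy phase schedule for modality-alignment training: each phase lists the
# module groups that are trainable, everything else stays frozen. Names are
# illustrative placeholders for the components discussed in the text.
MODULES = ["audio_encoder", "connector", "llm"]

PHASES = {
    1: {"audio_encoder", "connector"},   # align encoder output with the LLM
    2: {"connector", "llm"},             # joint multimodal training
}

def trainable(phase):
    """Map each module to a requires_grad-style flag for the given phase."""
    return {m: m in PHASES[phase] for m in MODULES}

print(trainable(1))   # {'audio_encoder': True, 'connector': True, 'llm': False}
```

Freezing the LLM in the early phase is what protects the backbone's text abilities while the speech-side modules learn to speak its embedding language.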
Although capable of processing speech input, it outputs only text. Spectron <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib156" title="">156</a>]</cite> addresses the alignment issue between text and speech representations by jointly supervising multiple objectives. IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> employs a two-stage training approach, constructing multiple cross-modal tasks from a single dataset to enable the model to better learn the semantic consistency between speech and text. Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite>, EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite>, and OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> adopt similar methodologies, commencing with supervised multi-task fine-tuning of the text LLM backbone to achieve speech-text modality alignment and develop a multimodal LLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib99" title="">99</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib120" title="">120</a>]</cite> using Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) tasks. Notably, Mini-Omni divides the training of various modules into three phases: the first phase utilizes data from speech recognition and synthesis to enhance the model’s abilities in these aspects, training only the ASR and TTS adapters. 
The second phase focuses exclusively on enhancing the model’s text capabilities when given speech inputs, updating only the LLM parameters while freezing other modules. Through these two training phases, the original language LLM’s capabilities are maximally preserved, while adapting to speech modality input and output, thereby addressing the primary modality alignment tasks.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.3 </span>Supervised Fine-tuning or Dialogue Dataset Fine-tuning</h4> <div class="ltx_para" id="S4.SS2.SSS3.p1"> <p class="ltx_p" id="S4.SS2.SSS3.p1.1">During this stage, most models use instruction-following datasets or dialogue data for supervised fine-tuning of the LLM, enhancing natural conversational abilities. <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>]</cite> propose a two-stage instruction-tuning process that includes cross-modal instruction fine-tuning and chain-of-modality instruction fine-tuning. Ultimately, the model follows the A-T-T-A method to achieve end-to-end speech input and output. EMOVA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite> employs a similar chain-of-modality concept to construct instruction-tuning datasets, empowering it to respond accurately to speech instructions. 
Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite>, OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite>, and SyncLLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> utilize spoken dialogue datasets for fine-tuning, endowing the models with conversational interaction capabilities. Remarkably, Moshi constructs a more natural and realistic dialogue dataset that incorporates elements such as noise and overlap, enabling the model to learn authentic multi-stream interactions. OmniFlatten fine-tunes the speech-text LLM using interleaved and serialized dialogues across three stages to progressively train the model in acquiring half-duplex and full-duplex communication capabilities. Similarly, SyncLLM employs a three-stage training procedure that predominantly uses synthetic spoken dialogue data along with a relatively small amount of real-world spoken dialogue data to develop a full-duplex voice agent.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.4 </span>Preference Optimization and Reinforcement Learning</h4> <div class="ltx_para" id="S4.SS2.SSS4.p1"> <p class="ltx_p" id="S4.SS2.SSS4.p1.1">The research on leveraging preference optimization to align a spoken dialogue model with human preferences is virtually absent. 
Recently, <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib5" title="">5</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib243" title="">243</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib23" title="">23</a>]</cite> adopted preference optimization for Text-to-Speech (TTS) models to align speech synthesis quality with human preferences but not for spoken dialogue models. Align-SLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib129" title="">129</a>]</cite> pioneers the integration of Direct Preference Optimization (DPO) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib170" title="">170</a>]</cite> in textless Spoken Language Models (SLMs) to enhance semantic understanding. It transforms continuous speech into discrete units using a pre-trained Hubert model and K-means clustering. LoRA fine-tuning on a Spoken LLM generates multiple speech continuations from prompts. Semantic metrics create preference data offline, making DPO training efficient and stable, eliminating the need for an external reward model. Coupled with curriculum learning <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib15" title="">15</a>]</cite>, Align-SLM progressively refines preference data selection, optimizing semantic feedback, and improving SLM performance.</p> </div> </section> </section> <section class="ltx_subsection" id="S4.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.3 </span>Training Frameworks and Generation Strategies</h3> <div class="ltx_para" id="S4.SS3.p1"> <p class="ltx_p" id="S4.SS3.p1.1">Recent advanced methods in spoken dialogue models employ a variety of innovative techniques to achieve more natural speech output and lower latency. 
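The offline DPO objective that Align-SLM applies (described in §4.2.4 above) can be written down compactly. Below is a minimal sketch of the standard DPO loss for a single preference pair; the variable names and example values are ours, not Align-SLM's code:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.
    logp_w / logp_l: policy log-probs of the preferred / dispreferred
    speech continuation; ref_*: the same quantities under the frozen
    reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# The loss falls as the policy favors the preferred continuation more
# strongly than the reference does; no external reward model is queried.
neutral = dpo_loss(-11.0, -11.0, -11.0, -11.0)  # no preference: log(2)
better = dpo_loss(-10.0, -12.0, -11.0, -11.0)   # policy favors the winner
```

Because the preference pairs are constructed offline from semantic metrics, each training step only needs these log-probabilities, which is what makes the procedure stable and reward-model-free.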
In this part, we explore various approaches that exemplify these advancements:</p> </div> <div class="ltx_para" id="S4.SS3.p2"> <p class="ltx_p" id="S4.SS3.p2.2"><math alttext="\bullet" class="ltx_Math" display="inline" id="S4.SS3.p2.1.m1.1"><semantics id="S4.SS3.p2.1.m1.1a"><mo id="S4.SS3.p2.1.m1.1.1" xref="S4.SS3.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S4.SS3.p2.1.m1.1b"><ci id="S4.SS3.p2.1.m1.1.1.cmml" xref="S4.SS3.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S4.SS3.p2.2.1">LLama-Omni.</em> LLama-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> adds a streaming speech decoder that operates after the LLM. This decoder runs in a non-autoregressive manner, taking the output hidden states from the LLM as input and generating the discrete unit sequence corresponding to the speech response. To model the variable-length mapping between input and output, LLama-Omni employs an upsample factor, denoted as <math alttext="\lambda" class="ltx_Math" display="inline" id="S4.SS3.p2.2.m2.1"><semantics id="S4.SS3.p2.2.m2.1a"><mi id="S4.SS3.p2.2.m2.1.1" xref="S4.SS3.p2.2.m2.1.1.cmml">λ</mi><annotation-xml encoding="MathML-Content" id="S4.SS3.p2.2.m2.1b"><ci id="S4.SS3.p2.2.m2.1.1.cmml" xref="S4.SS3.p2.2.m2.1.1">𝜆</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p2.2.m2.1c">\lambda</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p2.2.m2.1d">italic_λ</annotation></semantics></math>, along with Connectionist Temporal Classification (CTC) loss <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib69" title="">69</a>]</cite>. 
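These two ingredients can be sketched in a few lines. The names, shapes, and values below are illustrative assumptions, not LLama-Omni's actual code: the upsample factor stretches the LLM hidden-state sequence so the non-autoregressive decoder has enough frames, and CTC-style greedy collapse recovers the shorter unit sequence without explicit alignments.

```python
LAM = 3    # upsample factor lambda (illustrative value)
BLANK = 0  # CTC blank token id (assumption)

def upsample(hidden_states, lam=LAM):
    """Repeat each LLM hidden state lam times so the non-autoregressive
    decoder has enough frames for the longer discrete-unit sequence."""
    return [h for h in hidden_states for _ in range(lam)]

def ctc_collapse(frame_ids, blank=BLANK):
    """Greedy CTC decoding: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

frames = upsample(["h1", "h2"])           # ['h1', 'h1', 'h1', 'h2', 'h2', 'h2']
units = ctc_collapse([0, 5, 5, 0, 7, 7])  # [5, 7]
```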
This ensures that the model can generate speech responses simultaneously with text responses. Additionally, a predefined chunk size is set to further enable vocoder streaming synthesis of speech waveforms, facilitating real-time interaction and reducing latency.</p> </div> <div class="ltx_para" id="S4.SS3.p3"> <p class="ltx_p" id="S4.SS3.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S4.SS3.p3.1.m1.1"><semantics id="S4.SS3.p3.1.m1.1a"><mo id="S4.SS3.p3.1.m1.1.1" xref="S4.SS3.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S4.SS3.p3.1.m1.1b"><ci id="S4.SS3.p3.1.m1.1.1.cmml" xref="S4.SS3.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S4.SS3.p3.1.1">Mini-Omni.</em> Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> selects SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite>, a music-grade encoder, to discretize one second of audio into hundreds of tokens, which significantly increases the burden on the LLM for modeling speech tokens. 
Delay-pattern decoding strategies are often applied to model multiple parallel streams of acoustic tokens, as in MusicGen <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib40" title="">40</a>]</cite>, VoiceCraft <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib163" title="">163</a>]</cite>, and Parler-TTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib140" title="">140</a>]</cite>. Compared with traditional sequential step decoding, this strategy can effectively reduce the time steps required for LLM decoding and generating speech tokens. Inspired by this, Mini-Omni innovatively applies text-instructed delayed parallel generation to address the issue of long SNAC codebook sequences, simultaneously producing audio and text tokens. This effectively leverages and preserves the original capabilities of the language model. Moreover, Mini-Omni proposes a Batch Parallel Decoding method. Specifically, it generates two samples in parallel for a single input: the first predicts text tokens, and the second predicts both text and speech tokens simultaneously. The text output from the first sample is embedded into the corresponding positions of the second sample, while the second sample’s text output is discarded.
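The delay pattern underlying this decoding scheme is easy to illustrate. The toy function below is our own sketch (not Mini-Omni's implementation); it shifts codebook stream k right by k steps so that, at every decoding step, one token per codebook can be emitted in parallel:

```python
PAD = -1  # placeholder for positions not yet (or no longer) valid (assumption)

def apply_delay_pattern(codes):
    """codes: K codebook streams, each a list of T frame tokens.
    Stream k is delayed by k steps, so at any decoding step the model
    emits one token per codebook in parallel rather than making K
    sequential passes per frame."""
    K = len(codes)
    return [[PAD] * k + codes[k] + [PAD] * (K - 1 - k) for k in range(K)]

streams = apply_delay_pattern([[1, 2, 3],   # codebook 0
                               [4, 5, 6],   # codebook 1
                               [7, 8, 9]])  # codebook 2
# streams == [[1, 2, 3, -1, -1], [-1, 4, 5, 6, -1], [-1, -1, 7, 8, 9]]
```

T frames of K codebooks are thus generated in T + K - 1 steps instead of T * K sequential token predictions.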
This further enhances the model’s reasoning capabilities during dialogue, maximizing the transfer of its text-based abilities.</p> </div> <div class="ltx_para" id="S4.SS3.p4"> <p class="ltx_p" id="S4.SS3.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S4.SS3.p4.1.m1.1"><semantics id="S4.SS3.p4.1.m1.1a"><mo id="S4.SS3.p4.1.m1.1.1" xref="S4.SS3.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S4.SS3.p4.1.m1.1b"><ci id="S4.SS3.p4.1.m1.1.1.cmml" xref="S4.SS3.p4.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S4.SS3.p4.1.1">IntrinsicVoice.</em> IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> introduces a speech encoder and a streaming vocoder for the tokenization and detokenization of speech, and a GroupFormer for modeling speech and text sequences. This architecture integrates a large language model (LLM) with a GroupModel. Specifically, it uses a pre-trained HuBERT encoder <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib78" title="">78</a>]</cite> and its corresponding KMeans quantizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib74" title="">74</a>]</cite> to process speech inputs into discrete units. These units are organized into a grouped token sequence through a group partition operation. The grouped tokens are then passed through an embedding layer and adaptor module to map these embeddings into the LLM’s embedding space. The context embeddings output by the LLM are processed through a linear layer and concatenated with a specified number of learnable queries. 
This input is fed into a smaller non-autoregressive transformer encoder model, dubbed the "GroupModel," to predict a group of speech tokens in one step. The introduction of GroupFormer effectively improves the model’s ability to handle sequences within a group, mitigates the modality gap between speech and text, accelerates inference speed, and alleviates issues associated with long-sequence modeling.</p> </div> <div class="ltx_para" id="S4.SS3.p5"> <p class="ltx_p" id="S4.SS3.p5.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S4.SS3.p5.1.m1.1"><semantics id="S4.SS3.p5.1.m1.1a"><mo id="S4.SS3.p5.1.m1.1.1" xref="S4.SS3.p5.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S4.SS3.p5.1.m1.1b"><ci id="S4.SS3.p5.1.m1.1.1.cmml" xref="S4.SS3.p5.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p5.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p5.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S4.SS3.p5.1.1">Moshi.</em> Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> introduces a mini codec model with 8 codebooks at a frame rate of 12.5 Hz for speech representation, where one second corresponds to 100 speech tokens. It adopts an RQ-Transformer consisting of a Temporal Transformer and a smaller Depth Transformer as the backbone network for the LLM, hierarchically modeling multi-codebook audio tokens. Similar architectures have appeared in prior research, such as UniAudio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib232" title="">232</a>]</cite> and Megabyte <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib237" title="">237</a>]</cite>. 
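Returning to IntrinsicVoice's group partition operation described above, a minimal sketch (group size and padding value are illustrative assumptions, not the paper's settings):

```python
def group_partition(units, G=4, pad=0):
    """Pack a flat discrete-unit sequence into groups of size G (tail
    padded), shortening the sequence the LLM must model by roughly a
    factor of G."""
    padded = units + [pad] * (-len(units) % G)
    return [padded[i:i + G] for i in range(0, len(padded), G)]

groups = group_partition([3, 1, 4, 1, 5, 9], G=4)
# [[3, 1, 4, 1], [5, 9, 0, 0]] -- six tokens become two grouped positions
```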
The Depth Transformer models sub-sequence tokens conditioned on temporal context predicted by the Temporal Transformer. Given the smaller size of the Depth Transformer, sub-sequence generation can almost be viewed as parallel generation. This allows the model to scale to longer sequences by extending the temporal modeling capacity of the Temporal Transformer or to achieve greater depth by enhancing the hierarchical modeling capabilities of the Depth Transformer, rather than modeling the flattened sequence with a single model.</p> </div> <div class="ltx_para" id="S4.SS3.p6"> <p class="ltx_p" id="S4.SS3.p6.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S4.SS3.p6.1.m1.1"><semantics id="S4.SS3.p6.1.m1.1a"><mo id="S4.SS3.p6.1.m1.1.1" xref="S4.SS3.p6.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S4.SS3.p6.1.m1.1b"><ci id="S4.SS3.p6.1.m1.1.1.cmml" xref="S4.SS3.p6.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S4.SS3.p6.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S4.SS3.p6.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S4.SS3.p6.1.1">SyncLLM.</em> SyncLLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> employs an auto-regressive transformer decoder for full-duplex dialogue, integrating time synchronization to align speech units with the real-world clock. It predicts interleaved speech tokens for both dialogue partners, maintaining timing with speaker tags. The model is trained on deduplicated HuBERT token sequences to enhance semantic fidelity while managing latency by anticipating user responses. 
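Moshi's hierarchical decoding loop can be shown schematically. The functions below are stand-ins for the two transformers (our own illustration, not Moshi's implementation): the large Temporal Transformer runs once per frame, and the small Depth Transformer expands each frame's context vector into its K codebook tokens.

```python
def temporal_step(past_frames):
    """Stand-in for the large Temporal Transformer: one context vector
    per frame, conditioned on all previously generated frames."""
    return ("ctx", len(past_frames))

def depth_decode(ctx, K):
    """Stand-in for the small Depth Transformer: expands one frame's
    context vector into that frame's K codebook tokens."""
    return [(ctx[1], k) for k in range(K)]

def generate(num_frames, K=8):
    frames = []
    for _ in range(num_frames):
        ctx = temporal_step(frames)          # one big-model call per frame
        frames.append(depth_decode(ctx, K))  # K cheap small-model calls
    return frames

frames = generate(3, K=2)  # 3 frames of 2 codebook tokens, not 6 flat steps
```

The key property is that the expensive model's sequence length grows with frames, not frames times codebooks, which is what lets the architecture scale temporal capacity and codebook depth independently.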
Interpolation reconstructs token sequences to fit expected structures, facilitating seamless speech synthesis.</p> </div> <div class="ltx_para" id="S4.SS3.p7"> <p class="ltx_p" id="S4.SS3.p7.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.p7.1.1">Text-guided generation.</span> Some end-to-end methods like <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib244" title="">244</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib156" title="">156</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib25" title="">25</a>]</cite> use chain-of-thought reasoning, which allows guiding speech generation with the output of an underlying text LLM. However, this is fundamentally incompatible with live interactions, as the model needs to produce an entire answer as text before it starts speaking. Later methods <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> can accept user speech input and simultaneously output speech and text, ensuring high-quality responses while significantly reducing latency. LLama-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite> utilizes a streaming decoder to generate text and speech tokens in parallel. Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> is restructured to transfer language reasoning abilities to streaming audio output through a text-audio parallel decoding approach.
Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> details a novel feature, the Inner Monologue, which consists of joint modeling of the textual and speech modalities on the system side to improve the quality of interactions.</p> </div> <div class="ltx_para" id="S4.SS3.p8"> <p class="ltx_p" id="S4.SS3.p8.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.p8.1.1">W/o text-guided generation.</span> Other methods achieve speech-to-speech generation without relying on text stream generation. IntrinsicVoice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite> introduces a novel GroupModel that predicts a group of speech tokens in one step based on global context embeddings. SyncLLM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib203" title="">203</a>]</cite> predicts interleaved chunks of token sequences at each time step, allowing the model to handle all conversational cues such as backchannels, overlaps, interruptions, etc.</p> </div> </section> <section class="ltx_subsection" id="S4.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.4 </span>Discussions about Training Paradigm in Spoken Dialogue Models</h3> <section class="ltx_subsubsection" id="S4.SS4.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.1 </span>Text and Speech Modality Alignment</h4> <div class="ltx_para" id="S4.SS4.SSS1.p1"> <p class="ltx_p" id="S4.SS4.SSS1.p1.1">In spoken dialogue systems, the alignment between speech and text modalities is a crucial stage. 
To preserve the textual intelligence of large language models (LLMs) as much as possible, nearly all current methodologies <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib242" title="">242</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib154" title="">154</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite> incorporate a post-training phase utilizing speech-text paired data when developing spoken dialogue models. This may involve either expanding the vocabulary to treat speech tokens as an extension of the original vocabulary or using speech adaptors to map speech embeddings to the original text latent space of the LLM, and designing multi-task training objectives to achieve alignment between text and speech modalities. For example, data from speech recognition and speech synthesis can be used to train the model’s speech recognition and synthesis capabilities. Although this is an effective strategy, its implementation can still lead to a certain degree of catastrophic forgetting in LLMs due to the large volume of pre-trained text corpora and the imbalance with paired speech-text data, which can harm the model’s text-based capabilities. 
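Of the two options above, vocabulary expansion is straightforward to sketch. The token names and codebook size below are invented for illustration: discrete speech units are appended after the text vocabulary so that a single LLM softmax covers both modalities.

```python
# Stand-in text vocabulary and speech codebook size (both illustrative).
text_vocab = {"hello": 0, "world": 1}
N_UNITS = 4

# Append one new token per discrete speech unit after the text vocabulary,
# so the same LLM embedding table and softmax cover both modalities.
speech_vocab = {f"<unit_{u}>": len(text_vocab) + u for u in range(N_UNITS)}
vocab = {**text_vocab, **speech_vocab}

# A mixed speech-text training sequence is then just ids in one space:
seq = [vocab["hello"], vocab["<unit_2>"], vocab["<unit_0>"]]
# seq == [0, 4, 2]
```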
Therefore, precise parameter design and customized optimization strategies are needed to mitigate this issue as much as possible, as demonstrated by approaches like Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>.</p> </div> <div class="ltx_para" id="S4.SS4.SSS1.p2"> <p class="ltx_p" id="S4.SS4.SSS1.p2.1">This raises a consideration: during the training phase of spoken dialogue models, is it feasible to directly utilize speech data for adaptation to text-based LLMs, thereby eliminating the necessity for speech-text paired data? This is because unlabeled speech data is abundant and easily accessible, making it convenient and beneficial for training the speech intelligence of LLMs. This approach would require us to obtain a pre-aligned speech representation with the text modality. Perhaps we can consider further exploration and experimentation in the speech tokenizer component, such as directly mapping the semantic discrete units of speech onto the text token space to achieve enforced alignment.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS4.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.2 </span>Different Temporal Alignment Methods in Spoken Dialogue Models</h4> <div class="ltx_para" id="S4.SS4.SSS2.p1"> <p class="ltx_p" id="S4.SS4.SSS2.p1.1">In speech and text modalities, there is often a significant mismatch in sequence lengths. Even when some speech tokenizers <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib90" title="">90</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib119" title="">119</a>]</cite> employ extreme sequence compression methods, a length gap remains between the two. 
Temporal alignment information between speech and text has been explored in tasks like Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) as demonstrated by models such as Whisper <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib169" title="">169</a>]</cite>, FastSpeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib176" title="">176</a>]</cite>, and VITS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib107" title="">107</a>]</cite>. Recently, some spoken dialogue systems have utilized temporal alignment information to enhance model performance, yielding promising results. For instance, Spirit-LM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib158" title="">158</a>]</cite> uses interleaving text and speech tokens for continual pre-training on the LLaMA base model, significantly boosting the model’s performance in speech understanding and generation. Experimental visualizations demonstrate that the similarity between text and speech features is notably higher in models trained with interleaved token sequences compared to those trained without this approach. This indicates that providing the model with explicit fine-grained temporal alignment information can effectively enhance modality alignment and improve the performance of LLMs.</p> </div> <div class="ltx_para" id="S4.SS4.SSS2.p2"> <p class="ltx_p" id="S4.SS4.SSS2.p2.1">Mini-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> achieves parallel generation of text and speech by padding text tokens to match the length of speech tokens, allowing the LLM to implicitly learn the alignment information between speech and text tokens. 
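The padding idea can be sketched in a few lines (an illustrative sketch, not Mini-Omni's code; the pad token name is an assumption):

```python
PAD = "<pad>"  # filler token (name is illustrative)

def pad_text_to_speech(text_tokens, n_speech_tokens):
    """Pad the short text stream to the speech stream's length so both
    streams can be generated position-by-position in parallel."""
    return text_tokens + [PAD] * (n_speech_tokens - len(text_tokens))

aligned = pad_text_to_speech(["hi", "there"], 5)
# ['hi', 'there', '<pad>', '<pad>', '<pad>']
```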
This can be viewed as a form of sentence-level temporal alignment information, a method also utilized in recent speech synthesis work <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib30" title="">30</a>]</cite>. Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite>, on the other hand, uses word-level speech-text temporal alignment information and special marker tokens to achieve similar parallel generation capabilities. The difference is that Mini-Omni leaves the LLM to learn the alignment entirely implicitly, whereas Moshi first provides word-level alignment priors and then lets the model learn finer-grained alignments.</p> </div> <div class="ltx_para" id="S4.SS4.SSS2.p3"> <p class="ltx_p" id="S4.SS4.SSS2.p3.1">Exploring how different levels of temporal alignment priors (sentence-level, word-level, or phoneme-level) affect the training of spoken dialogue models is an intriguing area of research. Understanding how these various alignment strategies affect model performance can guide the development of more efficient and accurate systems.
For instance, sentence-level alignment might offer a broader contextual understanding, while word-level or phoneme-level alignments could provide more detailed synchronization between speech and text, potentially leading to improvements in nuanced tasks like speech synthesis and understanding.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS4.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.3 </span>Reinforcement Learning (RL) in Spoken Dialogue Models</h4> <div class="ltx_para" id="S4.SS4.SSS3.p1"> <p class="ltx_p" id="S4.SS4.SSS3.p1.1">Reinforcement Learning (RL) has proven to be an effective learning paradigm in text and image processing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib185" title="">185</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib196" title="">196</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib204" title="">204</a>]</cite>. Recent research has shown that Direct Preference Optimization (DPO) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib170" title="">170</a>]</cite> can be extended to music and speech generation <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib36" title="">36</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib243" title="">243</a>]</cite>. MusicRL <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib36" title="">36</a>]</cite> uses Reinforcement Learning from Human Feedback (RLHF) to improve music generation by fine-tuning a pretrained model for better text adherence and audio quality. By collecting extensive human feedback, MusicRL creates a more refined and subjective music generation system. 
Seed-TTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib5" title="">5</a>]</cite> explores RL methods, comparing external reward models like REINFORCE with simpler methods like DPO. The study highlights using REINFORCE to enhance speaker similarity and emotion controllability in the Seed-TTS system. Qwen2-Audio <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>]</cite> uses DPO to align with human preferences by optimizing responses based on human-annotated data. This enhances its ability to follow audio instructions accurately and intelligently respond to complex audio inputs, improving its performance in audio-centric tasks. However, in the dialogue system field, reinforcement learning techniques based on human feedback <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib82" title="">82</a>]</cite> are rarely applied. Considering the diversity of inputs and outputs in large language models, exploring the incorporation of reinforcement learning strategies such as Proximal Policy Optimization (PPO) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib185" title="">185</a>]</cite> can be beneficial. 
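As a reference point, the REINFORCE estimator mentioned for Seed-TTS has the textbook form below. This is our own sketch with a mean-reward baseline; the gradients and rewards are dummy values, and in practice the reward could be, e.g., a speaker-similarity or emotion score:

```python
def reinforce_grad(logprob_grads, rewards):
    """Policy-gradient estimate with a mean-reward baseline.
    logprob_grads: per-sample gradients of log pi(sample), as flat lists;
    rewards: per-sample scalar scores from the reward function."""
    baseline = sum(rewards) / len(rewards)   # variance-reducing baseline
    n = len(rewards)
    dim = len(logprob_grads[0])
    return [sum(g[d] * (r - baseline) for g, r in zip(logprob_grads, rewards)) / n
            for d in range(dim)]

grad = reinforce_grad([[0.5, -0.2], [0.1, 0.3]], [1.0, 0.0])
# approximately [0.1, -0.125]: samples scoring above the baseline push
# their log-probability up, below-baseline samples push it down.
```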
Additionally, considering the performance metrics for evaluating spoken dialogue systems, designing targeted reinforcement learning strategies and feedback functions to enhance different objectives is also a direction worth exploring.</p> </div> </section> </section> </section> <section class="ltx_section" id="S5"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">5 </span>Streaming, Duplex, and Interaction</h2> <div class="ltx_para" id="S5.p1"> <p class="ltx_p" id="S5.p1.1">Streaming, full-duplex technology, and interactions are crucial elements for enhancing the interactive capabilities of spoken dialogue models because they directly impact the system’s responsiveness, the fluidity of natural interaction, and its ability to handle complex interactions. Unlike text language models, spoken dialogue models require real-time processing of user input. <span class="ltx_text ltx_font_bold" id="S5.p1.1.1">Streaming</span> allows the system to instantly acquire and process speech data; <span class="ltx_text ltx_font_bold" id="S5.p1.1.2">full-duplex technology</span> enables both the system and user to speak simultaneously, enhancing the naturalness of interaction; and <span class="ltx_text ltx_font_bold" id="S5.p1.1.3">handling of interactions</span> provides the model with the ability to recognize and adapt to various conversational contexts, making the dialogue more intelligent and realistic. Building on early explorations, GPT-4o’s advanced spoken dialogue capabilities have ignited a surge of research interest. With real-time voice processing and natural conversational interaction, these models offer users a seamless and efficient communication experience. However, achieving these capabilities requires deep research into model architecture, data collection, system design, and training methods. The model needs to be carefully designed and optimized in terms of real-time performance, stability, and response speed.
At the same time, duplex technology is an indispensable component, ensuring that the voice model has both "ears" and a "mouth". Next, we will first discuss the streaming processing method in Section 5.1, then introduce the key technologies of duplex communication and explain how to handle interactions to improve user experience in Section 5.2.</p> </div> <section class="ltx_subsection" id="S5.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.1 </span>Streaming Spoken Dialogue Models</h3> <div class="ltx_para" id="S5.SS1.p1"> <p class="ltx_p" id="S5.SS1.p1.1">The core of streaming speech models lies in their "real-time" and "continuous" capabilities, meaning they can process input and generate output simultaneously without waiting for complete input. This includes two main aspects:</p> </div> <div class="ltx_para" id="S5.SS1.p2"> <p class="ltx_p" id="S5.SS1.p2.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.p2.1.m1.1"><semantics id="S5.SS1.p2.1.m1.1a"><mo id="S5.SS1.p2.1.m1.1.1" xref="S5.SS1.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.p2.1.m1.1b"><ci id="S5.SS1.p2.1.m1.1.1.cmml" xref="S5.SS1.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.p2.1.1">Streaming Understanding.</em> The model can process audio input as the user speaks, without needing to wait for the user to finish entirely, allowing it to align more naturally with the flow of conversation.</p> </div> <div class="ltx_para" id="S5.SS1.p3"> <p class="ltx_p" id="S5.SS1.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.p3.1.m1.1"><semantics id="S5.SS1.p3.1.m1.1a"><mo id="S5.SS1.p3.1.m1.1.1" xref="S5.SS1.p3.1.m1.1.1.cmml">∙</mo><annotation-xml
encoding="MathML-Content" id="S5.SS1.p3.1.m1.1b"><ci id="S5.SS1.p3.1.m1.1.1.cmml" xref="S5.SS1.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.p3.1.1">Streaming Generation.</em> This concept refers to the model’s ability to generate output without waiting for all intermediate hidden states. Instead, it can produce output progressively as processing occurs, which improves responsiveness and allows for smoother, more efficient interactions.</p> </div> <div class="ltx_para" id="S5.SS1.p4"> <p class="ltx_p" id="S5.SS1.p4.1">These streaming capabilities allow the model to perform more fluidly in real-time interactions, providing a seamless communication experience for users. We will explore streaming techniques in both end-to-end and cascaded spoken dialogue models, discussing the implementation methods of streaming in each system and highlighting their similarities and differences.</p> </div> <section class="ltx_subsubsection" id="S5.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">5.1.1 </span>Streaming End-to-End Spoken Dialogue Models</h4> <div class="ltx_para" id="S5.SS1.SSS1.p1"> <p class="ltx_p" id="S5.SS1.SSS1.p1.1">End-to-end streaming spoken dialogue models often leverage the knowledge of pre-trained text language models alongside an audio tokenizer, employing a tokenizer-detokenizer architecture to process and output audio signals. Based on the concepts of streaming input and output discussed above, end-to-end models also require specific design considerations to enable streaming capabilities. 
These designs center around the model’s input and output handling and can be distilled into three core techniques: causal convolution, causal attention mechanisms, and queue management.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p2"> <p class="ltx_p" id="S5.SS1.SSS1.p2.7"><span class="ltx_text ltx_font_bold" id="S5.SS1.SSS1.p2.7.1">Causal Convolution.</span> Causal Convolution <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib12" title="">12</a>]</cite> is a specialized form of convolution widely used in time-series processing, especially suitable for streaming speech models. The key feature of causal convolution is that the current output depends only on the current and past inputs, without being influenced by future inputs, thereby strictly respecting temporal order. Unlike regular convolution, causal convolution achieves this by "shifting" the convolution kernel to avoid accessing future information. In a one-dimensional time series, if the convolution kernel size is <math alttext="k" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.1.m1.1"><semantics id="S5.SS1.SSS1.p2.1.m1.1a"><mi id="S5.SS1.SSS1.p2.1.m1.1.1" xref="S5.SS1.SSS1.p2.1.m1.1.1.cmml">k</mi><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.1.m1.1b"><ci id="S5.SS1.SSS1.p2.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p2.1.m1.1.1">𝑘</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.1.m1.1c">k</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.1.m1.1d">italic_k</annotation></semantics></math>, a standard convolution would use data from <math alttext="(t-k/2)" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.2.m2.1"><semantics id="S5.SS1.SSS1.p2.2.m2.1a"><mrow id="S5.SS1.SSS1.p2.2.m2.1.1.1" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.cmml"><mo id="S5.SS1.SSS1.p2.2.m2.1.1.1.2" stretchy="false" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.cmml">(</mo><mrow id="S5.SS1.SSS1.p2.2.m2.1.1.1.1" 
xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.cmml"><mi id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.2" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.2.cmml">t</mi><mo id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.1" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.1.cmml">−</mo><mrow id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.cmml"><mi id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.2" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.2.cmml">k</mi><mo id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.1" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.1.cmml">/</mo><mn id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.3" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.3.cmml">2</mn></mrow></mrow><mo id="S5.SS1.SSS1.p2.2.m2.1.1.1.3" stretchy="false" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.cmml">)</mo></mrow><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.2.m2.1b"><apply id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1"><minus id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.1.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.1"></minus><ci id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.2.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.2">𝑡</ci><apply id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3"><divide id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.1.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.1"></divide><ci id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.2.cmml" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.2">𝑘</ci><cn id="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.3.cmml" type="integer" xref="S5.SS1.SSS1.p2.2.m2.1.1.1.1.3.3">2</cn></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.2.m2.1c">(t-k/2)</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.2.m2.1d">( italic_t - italic_k / 2 )</annotation></semantics></math> to <math alttext="(t+k/2)" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.3.m3.1"><semantics id="S5.SS1.SSS1.p2.3.m3.1a"><mrow id="S5.SS1.SSS1.p2.3.m3.1.1.1" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.cmml"><mo id="S5.SS1.SSS1.p2.3.m3.1.1.1.2" stretchy="false" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.cmml">(</mo><mrow id="S5.SS1.SSS1.p2.3.m3.1.1.1.1" 
xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.cmml"><mi id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.2" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.2.cmml">t</mi><mo id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.1" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.1.cmml">+</mo><mrow id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.cmml"><mi id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.2" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.2.cmml">k</mi><mo id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.1" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.1.cmml">/</mo><mn id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.3" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.3.cmml">2</mn></mrow></mrow><mo id="S5.SS1.SSS1.p2.3.m3.1.1.1.3" stretchy="false" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.cmml">)</mo></mrow><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.3.m3.1b"><apply id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1"><plus id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.1.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.1"></plus><ci id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.2.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.2">𝑡</ci><apply id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3"><divide id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.1.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.1"></divide><ci id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.2.cmml" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.2">𝑘</ci><cn id="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.3.cmml" type="integer" xref="S5.SS1.SSS1.p2.3.m3.1.1.1.1.3.3">2</cn></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.3.m3.1c">(t+k/2)</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.3.m3.1d">( italic_t + italic_k / 2 )</annotation></semantics></math> at the current time step <math alttext="t" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.4.m4.1"><semantics id="S5.SS1.SSS1.p2.4.m4.1a"><mi id="S5.SS1.SSS1.p2.4.m4.1.1" xref="S5.SS1.SSS1.p2.4.m4.1.1.cmml">t</mi><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.4.m4.1b"><ci id="S5.SS1.SSS1.p2.4.m4.1.1.cmml" 
xref="S5.SS1.SSS1.p2.4.m4.1.1">𝑡</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.4.m4.1c">t</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.4.m4.1d">italic_t</annotation></semantics></math>. Causal convolution, however, pads the input on the left with <math alttext="k-1" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.5.m5.1"><semantics id="S5.SS1.SSS1.p2.5.m5.1a"><mrow id="S5.SS1.SSS1.p2.5.m5.1.1" xref="S5.SS1.SSS1.p2.5.m5.1.1.cmml"><mi id="S5.SS1.SSS1.p2.5.m5.1.1.2" xref="S5.SS1.SSS1.p2.5.m5.1.1.2.cmml">k</mi><mo id="S5.SS1.SSS1.p2.5.m5.1.1.1" xref="S5.SS1.SSS1.p2.5.m5.1.1.1.cmml">−</mo><mn id="S5.SS1.SSS1.p2.5.m5.1.1.3" xref="S5.SS1.SSS1.p2.5.m5.1.1.3.cmml">1</mn></mrow><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.5.m5.1b"><apply id="S5.SS1.SSS1.p2.5.m5.1.1.cmml" xref="S5.SS1.SSS1.p2.5.m5.1.1"><minus id="S5.SS1.SSS1.p2.5.m5.1.1.1.cmml" xref="S5.SS1.SSS1.p2.5.m5.1.1.1"></minus><ci id="S5.SS1.SSS1.p2.5.m5.1.1.2.cmml" xref="S5.SS1.SSS1.p2.5.m5.1.1.2">𝑘</ci><cn id="S5.SS1.SSS1.p2.5.m5.1.1.3.cmml" type="integer" xref="S5.SS1.SSS1.p2.5.m5.1.1.3">1</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.5.m5.1c">k-1</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.5.m5.1d">italic_k - 1</annotation></semantics></math> zeros so that the kernel only uses data from <math alttext="t-k+1" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.6.m6.1"><semantics id="S5.SS1.SSS1.p2.6.m6.1a"><mrow id="S5.SS1.SSS1.p2.6.m6.1.1" xref="S5.SS1.SSS1.p2.6.m6.1.1.cmml"><mrow id="S5.SS1.SSS1.p2.6.m6.1.1.2" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.cmml"><mi id="S5.SS1.SSS1.p2.6.m6.1.1.2.2" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.2.cmml">t</mi><mo id="S5.SS1.SSS1.p2.6.m6.1.1.2.1" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.1.cmml">−</mo><mi id="S5.SS1.SSS1.p2.6.m6.1.1.2.3" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.3.cmml">k</mi></mrow><mo id="S5.SS1.SSS1.p2.6.m6.1.1.1" 
xref="S5.SS1.SSS1.p2.6.m6.1.1.1.cmml">+</mo><mn id="S5.SS1.SSS1.p2.6.m6.1.1.3" xref="S5.SS1.SSS1.p2.6.m6.1.1.3.cmml">1</mn></mrow><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.6.m6.1b"><apply id="S5.SS1.SSS1.p2.6.m6.1.1.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1"><plus id="S5.SS1.SSS1.p2.6.m6.1.1.1.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1.1"></plus><apply id="S5.SS1.SSS1.p2.6.m6.1.1.2.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1.2"><minus id="S5.SS1.SSS1.p2.6.m6.1.1.2.1.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.1"></minus><ci id="S5.SS1.SSS1.p2.6.m6.1.1.2.2.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.2">𝑡</ci><ci id="S5.SS1.SSS1.p2.6.m6.1.1.2.3.cmml" xref="S5.SS1.SSS1.p2.6.m6.1.1.2.3">𝑘</ci></apply><cn id="S5.SS1.SSS1.p2.6.m6.1.1.3.cmml" type="integer" xref="S5.SS1.SSS1.p2.6.m6.1.1.3">1</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.6.m6.1c">t-k+1</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.6.m6.1d">italic_t - italic_k + 1</annotation></semantics></math> to <math alttext="t" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p2.7.m7.1"><semantics id="S5.SS1.SSS1.p2.7.m7.1a"><mi id="S5.SS1.SSS1.p2.7.m7.1.1" xref="S5.SS1.SSS1.p2.7.m7.1.1.cmml">t</mi><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p2.7.m7.1b"><ci id="S5.SS1.SSS1.p2.7.m7.1.1.cmml" xref="S5.SS1.SSS1.p2.7.m7.1.1">𝑡</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p2.7.m7.1c">t</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p2.7.m7.1d">italic_t</annotation></semantics></math>, aligning the kernel to only consider current and past inputs. This padding ensures that each layer’s output depends solely on current and prior information, maintaining causality. To further expand the model’s receptive field while preserving causality, <span class="ltx_text ltx_font_bold" id="S5.SS1.SSS1.p2.7.2">dilated causal convolution</span> can be used. 
This technique introduces gaps within the kernel by inserting zeros between weights, effectively expanding the convolution’s range. This allows the model to capture longer dependencies in the data without increasing latency, which is particularly useful for streaming applications. In streaming spoken dialogue models, causal convolution plays a critical role in:</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p3"> <p class="ltx_p" id="S5.SS1.SSS1.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p3.1.m1.1"><semantics id="S5.SS1.SSS1.p3.1.m1.1a"><mo id="S5.SS1.SSS1.p3.1.m1.1.1" xref="S5.SS1.SSS1.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p3.1.m1.1b"><ci id="S5.SS1.SSS1.p3.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p3.1.1">Ensuring real-time processing.</em> Causal convolution allows the model to compute outputs without accessing future frames, enabling real-time processing by generating outputs as input is received, which is essential for streaming.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p4"> <p class="ltx_p" id="S5.SS1.SSS1.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p4.1.m1.1"><semantics id="S5.SS1.SSS1.p4.1.m1.1a"><mo id="S5.SS1.SSS1.p4.1.m1.1.1" xref="S5.SS1.SSS1.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p4.1.m1.1b"><ci id="S5.SS1.SSS1.p4.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p4.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" 
id="S5.SS1.SSS1.p4.1.1">Reducing latency.</em> By not requiring future input data, causal convolution significantly lowers the latency in speech models, making it more suitable for real-time interaction applications, such as voice assistants and live translation.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p5"> <p class="ltx_p" id="S5.SS1.SSS1.p5.1"><span class="ltx_text ltx_font_bold" id="S5.SS1.SSS1.p5.1.1">Causal Attention.</span> Causal Attention is a specialized form of the attention mechanism designed to ensure that each position in a sequence can only attend to previous positions, thus preserving the temporal order crucial for streaming models. This approach ensures that the model’s current output depends only on past and present information, preventing any “leakage” of future information, which is essential for real-time processing tasks. In causal attention, the attention mask is typically used to achieve causality. By applying a mask that blocks connections to future time steps, the model restricts each token’s receptive field to only the tokens before it. Specifically, a lower triangular mask is applied to the attention matrix, setting values to negative infinity for positions corresponding to future tokens. This masking technique ensures that the model’s predictions for each time step only consider current and past inputs, thereby adhering to a strict causal structure. In streaming speech models, causal attention plays a significant role in enabling real-time interaction. Unlike standard attention, which requires access to the entire sequence, causal attention can operate incrementally. 
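Both mechanisms described above, left-padded causal convolution and the lower-triangular attention mask, can be sketched in a few lines of plain Python (an illustrative simplification, not an implementation from any cited system):

```python
# Sketch of the two causality mechanisms described in this section
# (illustrative only): a causal 1-D convolution that left-pads the
# input with k-1 zeros, and a lower-triangular attention mask.

def causal_conv1d(x, kernel):
    """y[t] depends only on x[t-k+1..t]; no future samples are read."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(x)          # left padding only
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(x))]

def causal_mask(n):
    """mask[i][j] is True iff position i may attend to position j <= i."""
    return [[j <= i for j in range(n)] for i in range(n)]

y = causal_conv1d([2.0, 4.0, 6.0], [0.5, 0.5])  # y[0] uses only x[0]
mask = causal_mask(3)                            # row 0 sees only itself
```

In a real attention implementation, the mask is applied by setting the disallowed logits to negative infinity before the softmax, exactly as described in the paragraph above.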
As new inputs are processed, the model can generate outputs without waiting for future context.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p6"> <p class="ltx_p" id="S5.SS1.SSS1.p6.1"><span class="ltx_text ltx_font_bold" id="S5.SS1.SSS1.p6.1.1">Queue Management <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib220" title="">220</a>]</cite>.</span> Audio streams are typically split into frames, then processed in sequence via a queue management system that ensures real-time, orderly processing.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p7"> <p class="ltx_p" id="S5.SS1.SSS1.p7.1">Some end-to-end models, such as Llama-Omni<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>, Mini-Omni<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>]</cite> and Mini-Omni2<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite>, employ the non-streaming ASR model Whisper as their audio encoder. 
These models have made improvements on the output side to reduce latency.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p8"> <p class="ltx_p" id="S5.SS1.SSS1.p8.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p8.1.m1.1"><semantics id="S5.SS1.SSS1.p8.1.m1.1a"><mo id="S5.SS1.SSS1.p8.1.m1.1.1" xref="S5.SS1.SSS1.p8.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p8.1.m1.1b"><ci id="S5.SS1.SSS1.p8.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p8.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p8.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p8.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p8.1.1">Mini-Omni.</em> Mini-Omni uses a generation strategy called delayed parallel decoding, which applies layer-by-layer delays during audio token generation. This allows the model to generate text and multiple audio tokens simultaneously at each step, accelerating streaming audio generation and ensuring low-latency real-time output.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p9"> <p class="ltx_p" id="S5.SS1.SSS1.p9.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p9.1.m1.1"><semantics id="S5.SS1.SSS1.p9.1.m1.1a"><mo id="S5.SS1.SSS1.p9.1.m1.1.1" xref="S5.SS1.SSS1.p9.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p9.1.m1.1b"><ci id="S5.SS1.SSS1.p9.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p9.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p9.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p9.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p9.1.1">Llama-Omni.</em> Llama-Omni incorporates a non-autoregressive streaming speech decoder that leverages connectionist temporal classification (CTC) to directly generate a sequence of 
discrete audio tokens as the response.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p10"> <p class="ltx_p" id="S5.SS1.SSS1.p10.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p10.1.m1.1"><semantics id="S5.SS1.SSS1.p10.1.m1.1a"><mo id="S5.SS1.SSS1.p10.1.m1.1.1" xref="S5.SS1.SSS1.p10.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p10.1.m1.1b"><ci id="S5.SS1.SSS1.p10.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p10.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p10.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p10.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p10.1.1">Intrinsicvoice. <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>]</cite></em> Intrinsicvoice introduced the GroupFormer module to group speech tokens, reducing the length of speech sequences to match that of text sequences. 
This approach accelerates inference, alleviates the challenges of long-sequence modeling, and effectively narrows the gap between speech and text modalities. However, we do not consider these models fully streaming, because their input side is not designed for streaming.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p11"> <p class="ltx_p" id="S5.SS1.SSS1.p11.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p11.1.m1.1"><semantics id="S5.SS1.SSS1.p11.1.m1.1a"><mo id="S5.SS1.SSS1.p11.1.m1.1.1" xref="S5.SS1.SSS1.p11.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p11.1.m1.1b"><ci id="S5.SS1.SSS1.p11.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p11.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p11.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p11.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p11.1.1">Moshi. <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite></em> In contrast, Moshi references the architecture of SpeechTokenizer to train a streaming codec from scratch, serving as the audio tokenizer-detokenizer. 
The entire model, including the codec, transformer, and attention mechanism, is built on a causal structure.</p> </div> <div class="ltx_para" id="S5.SS1.SSS1.p12"> <p class="ltx_p" id="S5.SS1.SSS1.p12.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS1.SSS1.p12.1.m1.1"><semantics id="S5.SS1.SSS1.p12.1.m1.1a"><mo id="S5.SS1.SSS1.p12.1.m1.1.1" xref="S5.SS1.SSS1.p12.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS1.SSS1.p12.1.m1.1b"><ci id="S5.SS1.SSS1.p12.1.m1.1.1.cmml" xref="S5.SS1.SSS1.p12.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS1.SSS1.p12.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS1.SSS1.p12.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS1.SSS1.p12.1.1">OmniFlatten. <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite></em> OmniFlatten proposes chunk-based processing of text and speech along with gradual learning techniques and data handling to reduce turn-taking delays, such as response delays when users finish speaking or interrupt the system. These models have achieved true streaming capabilities and established a foundation for diverse, bidirectional interactions.</p> </div> </section> <section class="ltx_subsubsection" id="S5.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">5.1.2 </span>Streaming Cascaded Spoken Dialogue Models</h4> <div class="ltx_para" id="S5.SS1.SSS2.p1"> <p class="ltx_p" id="S5.SS1.SSS2.p1.1">Consistent with the above, ensuring streaming capability in a model relies on designing both input and output for streaming. 
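This requirement for incremental handling on both the input and output sides, together with the queue management technique mentioned in Section 5.1.1, can be sketched with a toy frame queue (all names here are illustrative, not taken from any cited system):

```python
from collections import deque

# Toy frame-queue for streaming: frames are enqueued as they arrive and
# processed strictly in arrival order, so the output for frame t is
# available before frame t+1 arrives. The `process` callable stands in
# for per-frame model inference. Illustrative only.

class FrameQueue:
    def __init__(self, process):
        self.pending = deque()
        self.process = process
        self.outputs = []

    def push(self, frame):
        self.pending.append(frame)
        while self.pending:                      # drain immediately
            self.outputs.append(self.process(self.pending.popleft()))

fq = FrameQueue(process=lambda f: f * 2)         # stand-in for inference
for frame in [1, 2, 3]:
    fq.push(frame)                               # outputs grow incrementally
```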
By its cascaded nature, such a model typically relies on external streaming ASR and TTS components, placing the streaming responsibility on these ASR and TTS modules.</p> </div> <div class="ltx_para" id="S5.SS1.SSS2.p2"> <p class="ltx_p" id="S5.SS1.SSS2.p2.1">In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib211" title="">211</a>]</cite>, comparative studies were conducted on the streaming ASR model <span class="ltx_text ltx_font_bold" id="S5.SS1.SSS2.p2.1.1">U2++ Conformer</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib219" title="">219</a>]</cite>, the streaming TTS model <span class="ltx_text ltx_font_bold" id="S5.SS1.SSS2.p2.1.2">XTTS-v2</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib21" title="">21</a>]</cite>, the non-streaming ASR model <span class="ltx_text ltx_font_bold" id="S5.SS1.SSS2.p2.1.3">Whisper</span>, and the non-streaming TTS model <span class="ltx_text ltx_font_bold" id="S5.SS1.SSS2.p2.1.4">VITS</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib109" title="">109</a>]</cite>. 
The combination of streaming components achieved the lowest latency and significantly contributed to interactive interruption capabilities.</p> </div> </section> </section> <section class="ltx_subsection" id="S5.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.2 </span>Duplex Technology and Interaction</h3> <section class="ltx_subsubsection" id="S5.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">5.2.1 </span>Duplex Technology</h4> <div class="ltx_para" id="S5.SS2.SSS1.p1"> <p class="ltx_p" id="S5.SS2.SSS1.p1.1">The term Duplex originates from the field of communications, where it is used to describe interaction modes between two parties in data transmission. Depending on the type of communication, duplex is divided into half-duplex and full-duplex.</p> </div> <div class="ltx_para" id="S5.SS2.SSS1.p2"> <p class="ltx_p" id="S5.SS2.SSS1.p2.1">With the development of audio processing and generation technology, the concept of duplex has been introduced to speech systems, especially within the context of speech language models. Here, duplex doesn’t just refer to signal transmission but emphasizes the synchronization and natural interaction in human-computer dialogue. Specifically, within the model architecture, it means that the model must retain its ability to perceive external input even while generating a response—essentially, the ability to listen while speaking.</p> </div> <div class="ltx_para" id="S5.SS2.SSS1.p3"> <p class="ltx_p" id="S5.SS2.SSS1.p3.1"><span class="ltx_text ltx_font_bold" id="S5.SS2.SSS1.p3.1.1">Simplex.</span> In simplex communication, data flows in only one direction. The speaker can send data, while the listener can only receive it. 
As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.F8.sf1" title="In Figure 8 ‣ 5.2.1 Duplex Technology ‣ 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">8(a)</span></a>, the robot continuously transmits audio, while the user has no ability to respond. This fixed-direction, one-way communication lacks interactivity.</p> </div> <div class="ltx_para" id="S5.SS2.SSS1.p4"> <p class="ltx_p" id="S5.SS2.SSS1.p4.1"><span class="ltx_text ltx_font_bold" id="S5.SS2.SSS1.p4.1.1">Half-Duplex.</span> In half-duplex communication, data flows in both directions but not simultaneously. The two parties must take turns speaking and listening. As illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.F8.sf2" title="In Figure 8 ‣ 5.2.1 Duplex Technology ‣ 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">8(b)</span></a>, the user speaks first, followed by a response delay during which the robot "thinks" before replying. The robot’s response occurs only after the user has finished speaking, and vice versa. This turn-taking method is similar to using a walkie-talkie, where each party can only transmit after the other has finished, limiting efficiency. Half-duplex is a common mode in early voice interaction systems. In a typical half-duplex interaction, there are noticeable pauses in the conversation; the user and the system cannot “speak” simultaneously, making the conversation feel less smooth. For example, voice assistants like Siri use wake words or button presses to trigger the dialogue and require the speaker to finish a complete sentence before responding. 
These systems typically adopt an ASR-LM-TTS cascaded structure and are often constrained by cascade delays and the turn-based nature of text language models. Although this interaction method is simple and easy to implement, it can feel rigid and disjointed in natural conversational settings, with notable latency. It is designed more for command execution than for interactive communication.</p> </div> <div class="ltx_para" id="S5.SS2.SSS1.p5"> <p class="ltx_p" id="S5.SS2.SSS1.p5.1"><span class="ltx_text ltx_font_bold" id="S5.SS2.SSS1.p5.1.1">Full-Duplex.</span> Full-duplex communication allows both parties to send and receive data simultaneously <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib142" title="">142</a>]</cite>. Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S5.F8.sf3" title="In Figure 8 ‣ 5.2.1 Duplex Technology ‣ 5.2 Duplex Technology and Interaction ‣ 5 Streaming, Duplex, and Interaction ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">8(c)</span></a> shows the user and robot engaging in overlapping, real-time interaction, where backchannels and interruptions are possible. This mode enables a natural, two-way conversation, where both the user and robot can speak, respond, and even interrupt each other as needed, much like a phone call. In dialogue systems, full-duplex means that the system and user can speak simultaneously and interrupt each other, making it closer to natural conversation in real life. Full-duplex large voice models allow the system not only to listen and understand the user while they speak but also to interrupt at appropriate moments or respond with backchannel cues. 
Moreover, the system can detect the user’s intent to interrupt and pause itself accordingly, maintaining a smooth flow in the interaction.</p> </div> <figure class="ltx_figure" id="S5.F8"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_1"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S5.F8.sf1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="66" id="S5.F8.sf1.g1" src="extracted/6000571/images/img-duplex/tu_01.png" width="598"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S5.F8.sf1.2.1.1" style="font-size:90%;">(a)</span> </span><span class="ltx_text" id="S5.F8.sf1.3.2" style="font-size:90%;">Simplex: One-way communication, and the direction is fixed.</span></figcaption> </figure> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S5.F8.sf2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="162" id="S5.F8.sf2.g1" src="extracted/6000571/images/img-duplex/tu_02.png" width="598"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S5.F8.sf2.2.1.1" style="font-size:90%;">(b)</span> </span><span class="ltx_text" id="S5.F8.sf2.3.2" style="font-size:90%;">Half-duplex: Two-way communication, but not simultaneously.</span></figcaption> </figure> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="S5.F8.sf3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="175" id="S5.F8.sf3.g1" src="extracted/6000571/images/img-duplex/tu_03.png" width="598"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S5.F8.sf3.2.1.1" 
style="font-size:90%;">(c)</span> </span><span class="ltx_text" id="S5.F8.sf3.3.2" style="font-size:90%;">Full-duplex: Two-way communication, simultaneously.</span></figcaption> </figure> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="S5.F8.2.1.1" style="font-size:90%;">Figure 8</span>: </span><span class="ltx_text" id="S5.F8.3.2" style="font-size:90%;">The illustration of Simplex, Half-Duplex, and Full-Duplex.</span></figcaption> </figure> <div class="ltx_para" id="S5.SS2.SSS1.p6"> <p class="ltx_p" id="S5.SS2.SSS1.p6.1">The ultimate goal of a spoken dialogue model is to make the user feel as though they are conversing with a real human friend. Clearly, full-duplex technology is essential for achieving natural voice dialogue systems, enabling the system to send and receive audio signals simultaneously, thus facilitating real-time interaction. Unlike text-based models, a full-duplex model doesn’t “cover its ears” while speaking. Users and intelligent agents can interrupt each other while listening or express their attitude through non-verbal signals, such as interjections or laughter. The challenges in realizing this lie in ensuring conversational fluidity, seamless turn-taking, and precise timing of interactions. 
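The three modes of Figure 8 can be contrasted with a small sketch (our own simplification for illustration): in each time slot, simplex only ever carries the system's audio, half-duplex carries at most one speaker, and full-duplex can carry both.

```python
# Who is heard in a single time slot under each duplex mode.
# A deliberate simplification of Figure 8, for illustration only.

def active_speakers(mode, user_talking, system_talking):
    if mode == "simplex":                        # one fixed direction
        return {"system"} if system_talking else set()
    if mode == "half-duplex":                    # strict turn-taking;
        if user_talking:                         # here the user is assumed
            return {"user"}                      # to hold the floor
        return {"system"} if system_talking else set()
    if mode == "full-duplex":                    # overlap is allowed
        return ({"user"} if user_talking else set()) | (
            {"system"} if system_talking else set())
    raise ValueError(f"unknown mode: {mode}")

# When both parties try to speak at once, only full-duplex hears both.
overlap = [active_speakers(m, True, True)
           for m in ("simplex", "half-duplex", "full-duplex")]
```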
Developing a full-duplex system that can both generate and receive voice signals in complex interactive scenarios remains a key focus in academic and industrial research.</p> </div> </section> <section class="ltx_subsubsection" id="S5.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">5.2.2 </span>Interaction</h4> <div class="ltx_para" id="S5.SS2.SSS2.p1"> <p class="ltx_p" id="S5.SS2.SSS2.p1.1">Now that we understand duplex technology, we can further explore duplex spoken dialogue models.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p2"> <p class="ltx_p" id="S5.SS2.SSS2.p2.1">We start with some concepts. Turn-taking is the core concept in duplex dialogue. It refers to the orderly process by which speakers alternate during a conversation, and it has been studied extensively over the past few decades across fields such as linguistics, phonetics, and sociology. Some research <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib173" title="">173</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib180" title="">180</a>]</cite> uses a non-deterministic finite-state machine with six states to describe the turn-taking behavior between the system and the user in a spoken dialogue system (SDS). It outlines all possible states of turn-taking within an SDS, defining the objective of turn-taking as minimizing mutual silence or overlap between interlocutors, thereby improving communication efficiency.
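The cited six-state machine is not reproduced here, but the general idea can be sketched with a small illustrative transition table; the state and event names below are our own, not those of the cited work:

```python
# Illustrative turn-taking state machine for a spoken dialogue system (SDS).
# The states and transitions are a simplified sketch, not the exact
# six-state machine of the cited work; the objective they encode is the
# same: minimize mutual silence and overlap between the interlocutors.

TRANSITIONS = {
    # (current state, event) -> next state
    ("USER_SPEAKS", "user_stops"):    "BOTH_SILENT",
    ("BOTH_SILENT", "system_starts"): "SYSTEM_SPEAKS",
    ("BOTH_SILENT", "user_starts"):   "USER_SPEAKS",
    ("SYSTEM_SPEAKS", "user_starts"): "BOTH_SPEAK",    # overlap
    ("SYSTEM_SPEAKS", "system_stops"): "BOTH_SILENT",
    ("BOTH_SPEAK", "system_stops"):   "USER_SPEAKS",   # system yields the turn
}

def step(state: str, event: str) -> str:
    """Advance the turn-taking state; unknown events keep the state."""
    return TRANSITIONS.get((state, event), state)

# One possible trace: the user interrupts while the system speaks,
# and the system yields the turn.
state = "USER_SPEAKS"
for event in ["user_stops", "system_starts", "user_starts", "system_stops"]:
    state = step(state, event)
print(state)  # USER_SPEAKS
```

Real systems attach probabilities and timing to such transitions; the table form only shows why overlap and mutual silence appear as explicit states to be minimized.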
Turn-taking encompasses three fundamental concepts:</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p3"> <p class="ltx_p" id="S5.SS2.SSS2.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p3.1.m1.1"><semantics id="S5.SS2.SSS2.p3.1.m1.1a"><mo id="S5.SS2.SSS2.p3.1.m1.1.1" xref="S5.SS2.SSS2.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p3.1.m1.1b"><ci id="S5.SS2.SSS2.p3.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p3.1.1">Turn-taking cues</em> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib53" title="">53</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib54" title="">54</a>]</cite>. These include voice, rhythm, breathing, gaze, or gestures. 
Agents can use these cues to determine whether to take a turn from the user or to relinquish the turn.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p4"> <p class="ltx_p" id="S5.SS2.SSS2.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p4.1.m1.1"><semantics id="S5.SS2.SSS2.p4.1.m1.1a"><mo id="S5.SS2.SSS2.p4.1.m1.1.1" xref="S5.SS2.SSS2.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p4.1.m1.1b"><ci id="S5.SS2.SSS2.p4.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p4.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p4.1.1">Turn-end detection or prediction.</em> The distinction between detection <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib73" title="">73</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib115" title="">115</a>]</cite> and prediction <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib114" title="">114</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib55" title="">55</a>]</cite> lies in that detection determines whether the agent should take a turn at the current moment, whereas prediction decides when the turn-taking should occur in the future.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p5"> <p class="ltx_p" id="S5.SS2.SSS2.p5.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p5.1.m1.1"><semantics id="S5.SS2.SSS2.p5.1.m1.1a"><mo id="S5.SS2.SSS2.p5.1.m1.1.1" xref="S5.SS2.SSS2.p5.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p5.1.m1.1b"><ci id="S5.SS2.SSS2.p5.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p5.1.m1.1.1">∙</ci></annotation-xml><annotation 
encoding="application/x-tex" id="S5.SS2.SSS2.p5.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p5.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p5.1.1">Overlap.</em> This mainly involves two situations. When the user and agent’s voices overlap, if the user intends to take the turn from the agent, this behavior is defined as an <span class="ltx_text ltx_font_italic" id="S5.SS2.SSS2.p5.1.2">interruption</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib103" title="">103</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib146" title="">146</a>]</cite>. If the user has no intention of taking the turn, this behavior is considered <span class="ltx_text ltx_font_italic" id="S5.SS2.SSS2.p5.1.3">backchannel</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib72" title="">72</a>]</cite> or a listener response, such as "uh-huh," "right."</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p6"> <p class="ltx_p" id="S5.SS2.SSS2.p6.1">Through these concepts, we can better understand turn-taking behavior in duplex dialogues. In summary, our interactions with voice dialogue systems can be categorized as <span class="ltx_text ltx_font_italic" id="S5.SS2.SSS2.p6.1.1">interruptions</span>, <span class="ltx_text ltx_font_italic" id="S5.SS2.SSS2.p6.1.2">backchannels</span>, and <span class="ltx_text ltx_font_italic" id="S5.SS2.SSS2.p6.1.3">normal turn exchanges</span>.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p7"> <p class="ltx_p" id="S5.SS2.SSS2.p7.1">The earliest full-duplex systems used a simple Voice Activity Detection (VAD) component to model whether the user intended to interrupt. 
However, this approach is inadequate for handling backchannel interaction forms, leading to frequent interruptions and introducing considerable delays.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p8"> <p class="ltx_p" id="S5.SS2.SSS2.p8.1">We can briefly categorize the exploration of interactions into cascaded systems and end-to-end systems based on duplex technology. Regardless of the system type, the critical core idea is that the system must continuously track external information in real-time, analyze it, and determine the model’s operational state accordingly. An interactive voice system must meet two requirements: 1) The ability to accept external information in real-time at any moment. 2) The ability to respond to this information accurately. This includes:</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p9"> <p class="ltx_p" id="S5.SS2.SSS2.p9.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p9.1.m1.1"><semantics id="S5.SS2.SSS2.p9.1.m1.1a"><mo id="S5.SS2.SSS2.p9.1.m1.1.1" xref="S5.SS2.SSS2.p9.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p9.1.m1.1b"><ci id="S5.SS2.SSS2.p9.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p9.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p9.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p9.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p9.1.1">Detecting User Interactions.</em> When the user tries to interject or provide new information, the system can recognize this intent and immediately stop its output to allow the user to speak.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p10"> <p class="ltx_p" id="S5.SS2.SSS2.p10.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p10.1.m1.1"><semantics id="S5.SS2.SSS2.p10.1.m1.1a"><mo id="S5.SS2.SSS2.p10.1.m1.1.1" xref="S5.SS2.SSS2.p10.1.m1.1.1.cmml">∙</mo><annotation-xml 
encoding="MathML-Content" id="S5.SS2.SSS2.p10.1.m1.1b"><ci id="S5.SS2.SSS2.p10.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p10.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p10.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p10.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p10.1.1">Backchanneling During User Speech.</em> While the user is speaking, the system can provide brief acknowledgments like "uh-huh" or "I see" to indicate active listening, which encourages the user to continue.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p11"> <p class="ltx_p" id="S5.SS2.SSS2.p11.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p11.1.m1.1"><semantics id="S5.SS2.SSS2.p11.1.m1.1a"><mo id="S5.SS2.SSS2.p11.1.m1.1.1" xref="S5.SS2.SSS2.p11.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p11.1.m1.1b"><ci id="S5.SS2.SSS2.p11.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p11.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p11.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p11.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p11.1.1">Quickly Responding After User Completion.</em> When the user finishes speaking, the system can promptly recognize this cue and respond without unnecessary delays, maintaining a smooth conversational flow.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p12"> <p class="ltx_p" id="S5.SS2.SSS2.p12.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p12.1.m1.1"><semantics id="S5.SS2.SSS2.p12.1.m1.1a"><mo id="S5.SS2.SSS2.p12.1.m1.1.1" xref="S5.SS2.SSS2.p12.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p12.1.m1.1b"><ci id="S5.SS2.SSS2.p12.1.m1.1.1.cmml" 
xref="S5.SS2.SSS2.p12.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p12.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p12.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p12.1.1">Handling Pauses in User Speech.</em> When the user briefly pauses, the system can interpret this as a moment of thought rather than an invitation to respond, thus avoiding premature interruptions and preserving the natural flow.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p13"> <p class="ltx_p" id="S5.SS2.SSS2.p13.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p13.1.m1.1"><semantics id="S5.SS2.SSS2.p13.1.m1.1a"><mo id="S5.SS2.SSS2.p13.1.m1.1.1" xref="S5.SS2.SSS2.p13.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p13.1.m1.1b"><ci id="S5.SS2.SSS2.p13.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p13.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p13.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p13.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p13.1.1">Interrupting the User When Necessary.</em> In situations where the system detects critical information, it can choose to interrupt the user to provide immediate feedback. For example, if the user is speaking but the system needs to alert them to an error, it can intervene in real-time to ensure effective communication.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p14"> <p class="ltx_p" id="S5.SS2.SSS2.p14.1"><span class="ltx_text ltx_font_bold" id="S5.SS2.SSS2.p14.1.1">Cascaded Systems.</span> To enable interactive functionality, cascaded spoken dialogue models typically require explicit modeling of dialogue turns. As the core, the large language model needs effective context and turn management. 
Next, we introduce several representative works on interaction in cascaded systems.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p15"> <p class="ltx_p" id="S5.SS2.SSS2.p15.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p15.1.m1.1"><semantics id="S5.SS2.SSS2.p15.1.m1.1a"><mo id="S5.SS2.SSS2.p15.1.m1.1.1" xref="S5.SS2.SSS2.p15.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p15.1.m1.1b"><ci id="S5.SS2.SSS2.p15.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p15.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p15.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p15.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p15.1.1">Duplex Conversation.</em> In <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib130" title="">130</a>]</cite>, three core modules are proposed to achieve smooth full-duplex dialogue: user state detection, response signal selection, and interruption detection. The user state detection module not only focuses on traditional turn-end detection but also identifies whether the user intends to switch turns, continue speaking, or hesitates during their speech. To achieve this, the system uses a multimodal model, taking audio and text as inputs, and incorporates features such as speech rhythm, pitch, and pauses for more accurate assessment of the user’s state, determining whether to respond immediately or wait longer. The response signal selection module inserts small backchannel cues (such as "uh-huh" or "right") at appropriate times to simulate natural human conversation. By analyzing a large volume of real dialogues, this module extracts and trains suitable response signals for various conversation scenarios. 
Using multi-label classification, the system selects the optimal response for each dialogue context, significantly reducing user waiting time and enhancing conversation flow. The interruption detection module flexibly responds to user interruptions. Unlike traditional rule-based detection methods, this system builds an end-to-end detection model with multimodal input (audio and text) that not only identifies genuine user interruptions but also avoids misinterpreting background noise or unintended voice signals as interruptions.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p16"> <p class="ltx_p" id="S5.SS2.SSS2.p16.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p16.1.m1.1"><semantics id="S5.SS2.SSS2.p16.1.m1.1a"><mo id="S5.SS2.SSS2.p16.1.m1.1.1" xref="S5.SS2.SSS2.p16.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p16.1.m1.1b"><ci id="S5.SS2.SSS2.p16.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p16.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p16.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p16.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p16.1.1">Outbound Agent System.</em> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib98" title="">98</a>]</cite> proposed a full-duplex dialogue scheme for outbound systems, focusing on the issues of conversational fluidity and timing of interaction in speech dialogue. This scheme uses semantic analysis to determine whether the user truly intends to interrupt the system and can handle disjointed expressions when users mention named entities. The core of this system is a full-duplex interaction finite-state machine (FSM), which retrieves text snippets from ASR results every 300 milliseconds to decide whether to interrupt. 
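The polling loop of such an FSM might look as follows; the keyword rule is a trivial stand-in for the system's actual semantic classifier, purely to show the control flow:

```python
# Sketch of the outbound system's interruption logic: every 300 ms a text
# snippet from streaming ASR is classified as a meaningful interruption or
# a mere backchannel. The keyword rule below is a stand-in for the
# BERT-based text classifier used in the actual system.

BACKCHANNELS = {"uh-huh", "right", "yeah", "mm-hmm", "ok"}

def is_meaningful_interruption(snippet: str) -> bool:
    """Stand-in for the streaming text classifier: treat anything beyond
    bare backchannel tokens as a real attempt to take the turn."""
    words = snippet.lower().strip(".,!? ").split()
    return len(words) > 0 and not all(w in BACKCHANNELS for w in words)

def poll_asr(snippets):
    """Simulate the 300 ms polling loop of the full-duplex FSM: return the
    index of the first snippet that should interrupt the system, or None."""
    for i, snippet in enumerate(snippets):
        if is_meaningful_interruption(snippet):
            return i  # the FSM stops system speech at this point
    return None

print(poll_asr(["uh-huh", "right", "wait, that price is wrong"]))  # 2
print(poll_asr(["uh-huh", "yeah"]))  # None
```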
Through continuous semantic analysis of user speech, the interruption model identifies meaningful user interruptions and avoids frequent interruptions caused by brief, meaningless responses (like "uh-huh"). The model employs a pre-trained BERT-based text classifier and utilizes streaming input, ensuring that the system can process and analyze user speech in real-time as it is received. Additionally, the system includes a Discontinuous Expression module to handle user pauses when mentioning named entities. Specifically, when users hesitate over entities (such as numbers, locations, or company names), VAD may erroneously detect turn-end.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p17"> <p class="ltx_p" id="S5.SS2.SSS2.p17.1">The advent of Large Language Models has significantly advanced generative AI development. Models like ChatGPT demonstrate strong capabilities in semantic understanding and logical reasoning, offering a straightforward way to integrate various dialogue components into a unified framework, which may simplify SDS construction. GPT-4o represents a milestone for dialogue systems, showcasing a nearly human-like conversational voice model. Its flexible interaction style and interruption mechanisms make human-computer interaction more natural and fluid.
However, as a commercial model, its training data and implementation details remain proprietary, making replication challenging.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p18"> <p class="ltx_p" id="S5.SS2.SSS2.p18.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p18.1.m1.1"><semantics id="S5.SS2.SSS2.p18.1.m1.1a"><mo id="S5.SS2.SSS2.p18.1.m1.1.1" xref="S5.SS2.SSS2.p18.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p18.1.m1.1b"><ci id="S5.SS2.SSS2.p18.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p18.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p18.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p18.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p18.1.1">Full-duplex LLM.</em> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib211" title="">211</a>]</cite> proposed a full-duplex spoken dialogue models based on LLMs, enabling simultaneous reception and transmission of voice signals through a perception module, an action module, and a neural finite-state machine (FSM). The perception module uses a streaming ASR model, capturing and processing user speech in real-time with 640-millisecond intervals per time step, converting it into token inputs for the LLM. The action module, utilizing a streaming TTS model, instantly converts the LLM-generated text into audio output and can pause or resume playback as needed, ensuring the system can generate audio while receiving user input. At the core is the neural FSM, allowing the LLM to switch between "speaking" and "listening" states. Controlled by FSM signals, the system can dynamically decide to continue speaking, listen, or interrupt based on the dialogue context. 
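A rough sketch of how such FSM control signals might gate the output stream follows; the control-token names are illustrative, not the cited model's actual vocabulary:

```python
# Sketch of a neural-FSM control loop: the LLM's output stream is assumed
# to interleave ordinary text tokens with special control tokens that
# switch the system between "speaking" and "listening". Only text tokens
# produced while in the speaking state reach the streaming TTS module.

SPEAK, LISTEN = "<speak>", "<listen>"

def run_fsm(token_stream):
    """Consume an LLM token stream; route text tokens to TTS only while in
    the speaking state. Returns (final_state, tokens_sent_to_tts)."""
    state, spoken = "listening", []
    for tok in token_stream:
        if tok == SPEAK:
            state = "speaking"
        elif tok == LISTEN:
            state = "listening"   # e.g. the user has started talking
        elif state == "speaking":
            spoken.append(tok)    # streamed to the TTS module
    return state, spoken

stream = [SPEAK, "Sure,", "the", "total", LISTEN, "is"]
print(run_fsm(stream))  # ('listening', ['Sure,', 'the', 'total'])
```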
Experimental results show that Wang et al.’s full-duplex streaming system reduces response latency threefold, achieves a response time within 500 milliseconds in over 50% of dialogues, and handles user interruptions at a rate of 96.7%, with an interruption accuracy of 54.7%.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p19"> <p class="ltx_p" id="S5.SS2.SSS2.p19.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p19.1.m1.1"><semantics id="S5.SS2.SSS2.p19.1.m1.1a"><mo id="S5.SS2.SSS2.p19.1.m1.1.1" xref="S5.SS2.SSS2.p19.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p19.1.m1.1b"><ci id="S5.SS2.SSS2.p19.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p19.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p19.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p19.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p19.1.1">VITA.</em> VITA is an open-source multimodal large language model aimed at enhancing multimodal interaction experiences. VITA can process multiple modalities, such as video, image, text, and audio, and achieves fluid human-computer interaction through a new duplex architecture involving two simultaneously operating models: one for generating responses to user queries, and another for continuously monitoring environmental inputs. When a new user query is detected, the generation model pauses, and the monitoring model processes the new query and generates an updated response. This setup enables VITA to support audio interruption, allowing users to ask new questions during system generation, with the system immediately pausing the current response to handle new input. VITA’s perception abilities are achieved through multimodal alignment and instruction fine-tuning, enabling it to switch automatically between different inputs.
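The generate-while-monitoring coordination can be sketched roughly as follows; the classify rule is a trivial stand-in for the monitoring model's decision, purely to show the data flow:

```python
# Sketch of VITA-style duplex coordination: one worker generates the
# current response while a monitor watches incoming audio. When the
# monitor flags a genuine new query (rather than background noise),
# generation is paused and the new query takes over. The classify()
# rule is an illustrative stand-in for the monitoring model.

def classify(chunk: str) -> str:
    # Stand-in for the monitoring model's query/noise decision.
    return "query" if chunk.startswith("user:") else "noise"

def duplex_session(response_tokens, incoming_chunks):
    """Interleave generation with monitoring; return what was actually
    spoken and the query that preempted generation (if any)."""
    spoken = []
    chunks = iter(incoming_chunks)
    for token in response_tokens:
        chunk = next(chunks, None)
        if chunk is not None and classify(chunk) == "query":
            return spoken, chunk   # pause generation, handle the new query
        spoken.append(token)
    return spoken, None

tokens = ["The", "weather", "today", "is", "sunny"]
audio = ["<hum>", "<hum>", "user: what about tomorrow?"]
print(duplex_session(tokens, audio))
# (['The', 'weather'], 'user: what about tomorrow?')
```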
Additionally, VITA employs state tokens to distinguish user input types, such as query audio, background noise, and text input, facilitating wake-free interaction. VITA’s enhanced listening module prevents unnecessary user feedback from interrupting system responses, improving robustness.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p20"> <p class="ltx_p" id="S5.SS2.SSS2.p20.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p20.1.m1.1"><semantics id="S5.SS2.SSS2.p20.1.m1.1a"><mo id="S5.SS2.SSS2.p20.1.m1.1.1" xref="S5.SS2.SSS2.p20.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p20.1.m1.1b"><ci id="S5.SS2.SSS2.p20.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p20.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p20.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p20.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p20.1.1">CleanS2S.</em><cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib159" title="">159</a>]</cite> This model employs a structured pipeline to enable responsive and flexible interactions in a spoken dialogue setting. Designed to facilitate seamless turn-taking and interruption handling, the model consists of several interconnected modules working in a coordinated sequence to optimize user experience. Starting with user input, the system uses a Voice Activity Detection (VAD) module to continuously monitor for incoming audio signals. As soon as a user starts speaking, VAD captures the input and immediately initiates processing by sending the audio data to the Automatic Speech Recognition (ASR) module. This quick detection and response setup allows the system to react to user input without delay. 
Once ASR transcribes the audio into text, the transcription is passed to the Large Language Model (LLM), which generates a relevant response based on the user’s query. Meanwhile, the model is designed to be interruption-aware. During response generation, if VAD detects a new user input (indicating an interruption or a follow-up query), the system can promptly adjust its processing flow. In this case, the LLM temporarily pauses its current task, allowing ASR to transcribe the new input, which the LLM then uses to generate an updated response. This interruption capability is achieved through the model’s layered processing design, allowing for adaptive turn-taking that feels natural and responsive. The Text-to-Speech (TTS) module then converts the generated text response into audio, which is transmitted to the user via WebSocket. To further support interruption handling, TTS breaks down lengthy responses into smaller audio segments that are sent progressively. This segmentation allows the system to stop audio output instantly if an interruption occurs, switching to the new input without delay. Each segment is prepared and sent only after a brief VAD check, ensuring that the system is ready to pause and handle new input at any time. This interconnected processing chain—VAD detecting input, ASR transcribing, LLM generating responses, and TTS outputting segmented audio—creates a duplex interaction framework that balances response generation and user-driven interruptions. By seamlessly coordinating these components, the model provides a fluid, real-time dialogue experience that adapts to user interactions dynamically.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p21"> <p class="ltx_p" id="S5.SS2.SSS2.p21.1"><span class="ltx_text ltx_font_bold" id="S5.SS2.SSS2.p21.1.1">End-to-End Systems.</span> In contrast, end-to-end spoken dialogue models do not require explicit modeling of dialogue turns; instead, they learn interaction modeling directly from training data. 
Next, we introduce several representative works on interaction in end-to-end systems.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p22"> <p class="ltx_p" id="S5.SS2.SSS2.p22.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p22.1.m1.1"><semantics id="S5.SS2.SSS2.p22.1.m1.1a"><mo id="S5.SS2.SSS2.p22.1.m1.1.1" xref="S5.SS2.SSS2.p22.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p22.1.m1.1b"><ci id="S5.SS2.SSS2.p22.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p22.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p22.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p22.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p22.1.1">dGSLM.</em> In end-to-end systems, the introduction of the dGSLM model marks a significant milestone in full-duplex technology development. Within the dGSLM framework, duplex technology is effectively implemented. This model demonstrates how to capture complex interactions within dialogues directly from raw audio data through generative spoken dialogue modeling, without relying on text. The core innovation of dGSLM is the dual-tower Transformer architecture, called the Dialogue Transformer Language Model (DLM), which uses a cross-attention mechanism to enable the system to process two parallel audio channels simultaneously. Through this architecture, the model not only independently generates speech for each channel but also shares information between channels using cross-attention, effectively modeling silences and interaction events. It leverages the HuBERT encoder and HiFi-GAN decoder, combined with the dual-tower DLM, and is trained on 2,000 hours of dual-channel telephone conversation audio (Fisher dataset), where each speaker in a conversation is allocated an independent audio track. 
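A toy sketch of the resulting dual-channel generation loop follows; the predict rule is a trivial stand-in for one Transformer tower (the real DLM predicts discrete HuBERT units and their durations, conditioning on the other channel via cross-attention):

```python
# Toy sketch of dual-channel autoregressive generation in the dGSLM
# spirit: at each step a token is predicted for both channels, and each
# channel's prediction conditions on the other channel's history (the
# role played by cross-attention in the dual-tower DLM). The "model"
# here is a hand-written rule, purely to show the data flow.

def predict(channel_hist, other_hist):
    # Stand-in for one Transformer tower: stay silent ("SIL") while the
    # other channel is speaking, otherwise keep emitting speech units.
    if other_hist and other_hist[-1] != "SIL":
        return "SIL"
    return "unit"

def generate(steps, ch_a, ch_b):
    for _ in range(steps):
        a = predict(ch_a, ch_b)   # each tower sees both histories
        b = predict(ch_b, ch_a)
        ch_a.append(a)
        ch_b.append(b)
    return ch_a, ch_b

# Speaker A holds the floor; speaker B remains silent.
a, b = generate(3, ["unit"], ["SIL"])
print(a, b)  # ['unit', 'unit', 'unit', 'unit'] ['SIL', 'SIL', 'SIL', 'SIL']
```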
The dGSLM model transforms the audio on both channels into discrete tokens using HuBERT, and the DLM model autoregressively predicts the next audio token and its duration. Finally, the HiFi-GAN<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib108" title="">108</a>]</cite> decoder reconstructs the audio for both channels. This approach differs significantly from traditional text-dependent spoken dialogue models, with a particular emphasis on modeling turn-taking and backchanneling capabilities. This capability gives dGSLM a notable advantage in duplex voice interaction, better mimicking the natural dynamics of human conversation. Through its duplex model design, dGSLM represents an essential step forward in interactive capabilities and provides a foundation for further advancements.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p23"> <p class="ltx_p" id="S5.SS2.SSS2.p23.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p23.1.m1.1"><semantics id="S5.SS2.SSS2.p23.1.m1.1a"><mo id="S5.SS2.SSS2.p23.1.m1.1.1" xref="S5.SS2.SSS2.p23.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p23.1.m1.1b"><ci id="S5.SS2.SSS2.p23.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p23.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p23.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p23.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p23.1.1">Moshi.</em> As a novel full-duplex architecture, Moshi incorporates a rich array of design concepts. Unlike dGSLM, Moshi does not abandon the language model’s ability in text dialogue. Moshi’s architecture is based on the Helium language model and Mimi neural audio codec, both trained from scratch. 
Helium, as a large pre-trained text language model, provides strong reasoning capabilities, while Mimi handles audio signal encoding and decoding. To achieve real-time interaction, Moshi is designed as a multi-stream architecture, simultaneously processing "user" and "moshi" audio streams without explicitly modeling speaker turns. Moshi also introduces the "Inner Monologue" method within the "moshi" audio stream, a process that jointly models text and audio tokens during training and inference. This approach allows the model to fully utilize textual knowledge while maintaining speech-to-speech system characteristics, significantly enhancing generation quality. Mimi, a neural audio codec integrating semantic and acoustic information through residual vector quantization and knowledge distillation, captures high-quality user input audio and Moshi’s output voice efficiently. To jointly model Moshi and user audio streams alongside Moshi’s text tokens, Depth Transformer with streaming inference capabilities is employed. The Mimi encoder and decoder combine convolutional and Transformer layers, with causal convolutions, allowing for streaming operation. Moshi is pre-trained on unsupervised audio data to handle speech scenarios and then fine-tuned on the Fisher dataset to address overlapping speech and interruptions. Finally, the system is further optimized on a custom instruction-tuning dataset, ensuring robust performance across various interactive scenarios. 
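The multi-stream framing can be sketched as follows; the codebook counts, token values, and padding symbol are illustrative, not Moshi's actual configuration:

```python
# Sketch of Moshi-style multi-stream frames: at every time step the model
# sees one frame containing the user's audio tokens, Moshi's own audio
# tokens, and Moshi's text token (the "Inner Monologue"). The numbers of
# RVQ codebooks and the padding token here are illustrative.

from dataclasses import dataclass

@dataclass
class Frame:
    user_audio: list   # RVQ codes for the user stream, one per codebook
    moshi_audio: list  # RVQ codes for Moshi's own stream
    moshi_text: str    # inner-monologue text token ("<pad>" when silent)

def flatten(frames):
    """Serialize frames into the single token sequence a decoder-only
    model would consume, keeping the per-step stream alignment."""
    seq = []
    for f in frames:
        seq.extend(f.user_audio + f.moshi_audio + [f.moshi_text])
    return seq

frames = [
    Frame([101, 7], [55, 12], "<pad>"),  # user speaking, Moshi silent
    Frame([102, 9], [56, 13], "Hel"),    # Moshi starts to answer
]
print(flatten(frames))
# [101, 7, 55, 12, '<pad>', 102, 9, 56, 13, 'Hel']
```

Because both audio streams and the text stream advance in lock-step, no explicit speaker-turn markers are needed: overlap is just both audio streams being non-silent in the same frame.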
Experimental results show that Moshi excels in speech modeling and spoken QA tasks, especially in latency, achieving a theoretical latency of 160 milliseconds and 200 milliseconds in practice, significantly lower than the typical 230 milliseconds in natural conversation, enhancing real-time interaction and conversation flow.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p24"> <p class="ltx_p" id="S5.SS2.SSS2.p24.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p24.1.m1.1"><semantics id="S5.SS2.SSS2.p24.1.m1.1a"><mo id="S5.SS2.SSS2.p24.1.m1.1.1" xref="S5.SS2.SSS2.p24.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p24.1.m1.1b"><ci id="S5.SS2.SSS2.p24.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p24.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p24.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p24.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p24.1.1">Parrot.</em> The Parrot model <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib148" title="">148</a>]</cite> incorporates multiple features specifically designed to enhance interaction in spoken dialogue. It uses a dual-channel audio setup, where each channel represents a different speaker. This configuration allows Parrot to manage both sides of a conversation independently, facilitating real-time turn-taking. By distinguishing between the user’s input and the system’s response on separate channels, the model can listen and respond in parallel, creating a more natural conversational flow. To handle simultaneous speaker inputs effectively, Parrot employs a "next-token-pair prediction" mechanism, allowing it to predict tokens for both channels in a coordinated sequence.
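A toy sketch of next-token-pair decoding follows; the pair-prediction rule is a trivial stand-in for the actual language model head, and only the loop structure mirrors the idea:

```python
# Sketch of "next-token-pair prediction": instead of predicting one token
# at a time, the model predicts a (user-channel, agent-channel) pair at
# each step, so both sides of the conversation advance together. The
# next_pair rule below is a hand-written stand-in for the LM.

def next_pair(history):
    # Stand-in for the model: the agent stays silent ("SIL") until the
    # user channel goes silent, then takes the turn.
    last_user, _ = history[-1]
    return ("SIL", "speech") if last_user == "SIL" else ("speech", "SIL")

def decode(history, steps):
    for _ in range(steps):
        history.append(next_pair(history))
    return history

# The user speaks for one step, then stops; the agent takes the turn.
history = decode([("speech", "SIL"), ("SIL", "SIL")], 2)
print(history)
# [('speech', 'SIL'), ('SIL', 'SIL'), ('SIL', 'speech'), ('SIL', 'speech')]
```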
This approach helps the model manage conversational dynamics such as overlapping speech and smooth transitions between turns, adjusting response timing based on the user’s input. During inference, Parrot supports streaming input, enabling continuous processing of user audio on one channel while generating responses on the other. This streaming capability allows the model to respond to live spoken input in real time, handling turn-taking, pauses, and interruptions dynamically. Unlike cascaded systems that rely on intermediate text conversions, Parrot processes audio directly, reducing latency and allowing immediate responses to spoken input. These interaction-focused design choices make Parrot highly responsive, enabling it to manage turn-taking naturally, respond to interruptions, and handle overlapping speech.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p25"> <p class="ltx_p" id="S5.SS2.SSS2.p25.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p25.1.m1.1"><semantics id="S5.SS2.SSS2.p25.1.m1.1a"><mo id="S5.SS2.SSS2.p25.1.m1.1.1" xref="S5.SS2.SSS2.p25.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p25.1.m1.1b"><ci id="S5.SS2.SSS2.p25.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p25.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p25.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p25.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p25.1.1">Mini-Omni2.</em> Mini-Omni2 is an open-source multimodal large language model aimed at simulating the multimodal capabilities of GPT-4o in vision, hearing, and text, supporting real-time full-duplex interaction. Mini-Omni2 combines visual and audio encoders with a language model to enable simultaneous input and output of images, audio, and text. 
The model incorporates an instruction-based interrupt mechanism for more flexible user interaction, and uses a delayed parallel generation algorithm that lets it produce text and audio responses simultaneously, greatly improving real-time conversational capability and response speed. To achieve full-duplex interaction, this interrupt mechanism is trained on a specially constructed dataset with irq (interrupt) and n-irq (non-interrupt) state markers. To train the interruption functionality, the researchers synthesized noisy speech data containing specific command phrases (such as "Stop Omni") in various voices and tones, simulating scenarios in which users issue interrupt commands. The dataset also includes background noises, such as environmental sounds, music, and other dialogues, enhancing the model’s robustness in complex environments. During training, Mini-Omni2 controls the output flow through the irq and n-irq state markers, generating them in real time to decide whether to continue output; in this way, the model can immediately halt generation on a user instruction and switch to "listening" mode during live dialogue. The training data consists of long audio streams from which the model extracts and encodes user commands like "Stop Omni." Researchers inserted interrupt commands at various time points, marking data after the insertion point as irq and data before it as n-irq. 
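The irq/n-irq labeling scheme can be sketched in a few lines; the frame counts and label strings below are illustrative, and the paper's actual marker format may differ.

```python
def label_interrupt_states(n_frames, interrupt_frame):
    """Label each frame of a training stream: frames before the inserted
    interrupt command are 'n-irq' (keep generating), frames from the
    insertion point onward are 'irq' (halt and switch to listening)."""
    return ["n-irq" if t < interrupt_frame else "irq" for t in range(n_frames)]

# An interrupt command inserted at frame 6 of a 10-frame stream:
labels = label_interrupt_states(n_frames=10, interrupt_frame=6)
# → frames 0-5 are 'n-irq', frames 6-9 are 'irq'
```

At inference time, the model emits the same marker alongside each output step; the first 'irq' prediction stops generation.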
This labeling method ensures that the model learns to accurately identify interrupt commands in complex audio inputs and respond appropriately.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p26"> <p class="ltx_p" id="S5.SS2.SSS2.p26.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p26.1.m1.1"><semantics id="S5.SS2.SSS2.p26.1.m1.1a"><mo id="S5.SS2.SSS2.p26.1.m1.1.1" xref="S5.SS2.SSS2.p26.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p26.1.m1.1b"><ci id="S5.SS2.SSS2.p26.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p26.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p26.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p26.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p26.1.1">SyncLLM.</em> SyncLLM achieves full-duplex dialogue and interruption capabilities through multi-stream interleaving and chunk processing. SyncLLM divides the conversation’s audio stream into fixed-sized chunks, each corresponding to a specific time interval. The model alternates between generating user and system speech segments within each time step (chunk), ensuring real-time system responses while processing user speech input. To maintain temporal synchronization with the user, SyncLLM predicts the user’s speech at each time step before generating each system chunk, using it as context to infer the system’s next response. This mechanism enables the system to keep pace with the conversation even with network latency. The chunk method allows SyncLLM to handle both user and system audio streams simultaneously, supporting complex dialogue features like speech overlap, interruption, and real-time feedback. 
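The chunk-level interleaving that keeps the two streams time-synchronized can be sketched as follows; the token values and chunk size are illustrative, not SyncLLM's real configuration.

```python
def interleave_chunks(user_tokens, system_tokens, chunk_size):
    """Interleave two time-aligned token streams chunk by chunk: for each
    time interval, the user's chunk is followed by the system's chunk, the
    way a multi-stream sequence for full-duplex training can be laid out."""
    assert len(user_tokens) == len(system_tokens)
    seq = []
    for start in range(0, len(user_tokens), chunk_size):
        seq.extend(user_tokens[start:start + chunk_size])    # user chunk
        seq.extend(system_tokens[start:start + chunk_size])  # system chunk
    return seq

seq = interleave_chunks(list("uuuu"), list("ssss"), chunk_size=2)
# → ['u', 'u', 's', 's', 'u', 'u', 's', 's']
```

At inference, the model fills in the user chunk it has not yet heard (a prediction) before emitting its own chunk, which is how it tolerates network latency while staying on the shared clock.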
Additionally, by using de-duplicated speech token sequences and periodic synchronization markers, the model efficiently performs chunk-level real-time inference, making conversation more fluid and natural.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p27"> <p class="ltx_p" id="S5.SS2.SSS2.p27.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p27.1.m1.1"><semantics id="S5.SS2.SSS2.p27.1.m1.1a"><mo id="S5.SS2.SSS2.p27.1.m1.1.1" xref="S5.SS2.SSS2.p27.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p27.1.m1.1b"><ci id="S5.SS2.SSS2.p27.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p27.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p27.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p27.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p27.1.1">OmniFlatten.</em> Similar to SyncLLM, the OmniFlatten model achieves full-duplex and interruption functionality primarily through multi-stream data processing and progressive training. To enable full-duplex dialogue, the model adopts a multi-stream architecture that interleaves the user’s speech stream with the assistant’s speech and text streams into a single sequence for training, simplifying multimodal modeling and enhancing real-time capability. OmniFlatten first performs modality alignment of the underlying text language model through multitask supervised fine-tuning, giving it the basic capability to understand and generate both speech and text simultaneously. Through a progressive training process, OmniFlatten attains full-duplex capability in three stages: initial training for half-duplex dialogue, then removing the user’s text stream to support real-time prediction with multi-stream data, and finally removing the assistant’s text stream to enable pure speech stream generation. 
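The progressive stream removal can be sketched as below. The stream names, chunk layout, and stage numbering are illustrative reconstructions of the three stages just described, not OmniFlatten's actual data format.

```python
def flatten_streams(chunks, stage):
    """Flatten per-interval multi-stream chunks into one training sequence,
    dropping streams as training progresses (stage numbers illustrative):
      stage 1: user speech + user text + assistant text + assistant speech
      stage 2: drop the user text stream
      stage 3: drop the assistant text stream too (pure speech-to-speech)
    Each chunk is a dict of per-stream token lists for one time interval."""
    keep = {
        1: ["user_speech", "user_text", "asst_text", "asst_speech"],
        2: ["user_speech", "asst_text", "asst_speech"],
        3: ["user_speech", "asst_speech"],
    }[stage]
    seq = []
    for chunk in chunks:
        for stream in keep:
            seq.extend(chunk[stream])
    return seq

chunks = [{"user_speech": ["u1"], "user_text": ["ut1"],
           "asst_text": ["at1"], "asst_speech": ["a1"]}]
flatten_streams(chunks, stage=3)   # → ['u1', 'a1']
```

Each stage reuses the previous stage's weights, so the text streams act as scaffolding that is gradually removed rather than retrained from scratch.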
These steps reduce reliance on text and decrease latency, allowing the system to generate voice responses while receiving user speech input. By using a block-by-block generation strategy, OmniFlatten divides the input and output speech sequences into fixed-size blocks, processing each segment in turn. This effectively implements streaming processing, ensuring low latency and high responsiveness in full-duplex dialogue, thereby providing a more natural response to user interruptions.</p> </div> <div class="ltx_para" id="S5.SS2.SSS2.p28"> <p class="ltx_p" id="S5.SS2.SSS2.p28.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S5.SS2.SSS2.p28.1.m1.1"><semantics id="S5.SS2.SSS2.p28.1.m1.1a"><mo id="S5.SS2.SSS2.p28.1.m1.1.1" xref="S5.SS2.SSS2.p28.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S5.SS2.SSS2.p28.1.m1.1b"><ci id="S5.SS2.SSS2.p28.1.m1.1.1.cmml" xref="S5.SS2.SSS2.p28.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S5.SS2.SSS2.p28.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S5.SS2.SSS2.p28.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S5.SS2.SSS2.p28.1.1">Freeze-Omni.</em> To support duplex dialogue, Freeze-Omni <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib213" title="">213</a>]</cite> uses a chunk-level state prediction mechanism for natural turn-taking. When the user begins speaking, a voice activity detection module identifies the audio input, prompting the model to process the audio chunk by chunk. After processing each chunk, the model’s classification layer predicts the conversation state to determine the next action. 
There are three possible states: State 0, where the model continues listening for more input, assuming the user hasn’t completed their turn; State 1, where the model interrupts to provide an immediate response if a quick acknowledgment or feedback is needed; and State 2, where the model has completed processing the current user input and is ready to generate and output a response, thus transitioning smoothly into the response phase without further listening. This chunk-wise state prediction enables the model to decide effectively when to respond and when to continue listening, enhancing its ability to handle natural conversational cues and support interactive dialogue.</p> </div> </section> <section class="ltx_subsubsection" id="S5.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">5.2.3 </span>Discussions about streaming and interaction</h4> <div class="ltx_para" id="S5.SS2.SSS3.p1"> <p class="ltx_p" id="S5.SS2.SSS3.p1.1">Significant progress has been made in spoken dialogue models, particularly in real-time interaction and semantic understanding, with notable achievements in streaming processing and full-duplex interaction. Current systems exhibit strong technical capabilities in reducing response latency, enhancing interruption handling, and improving the naturalness of conversation. However, existing spoken dialogue models still lack a unified system that can handle all forms of interaction seamlessly. Future research could explore new frameworks to better manage both user interruptions and the system’s ability to interrupt users, making interactions more natural. Additionally, standardized benchmarks for evaluating interaction capabilities remain underdeveloped. 
A unified evaluation benchmark would provide a consistent method for assessing and comparing the performance of different models, thereby advancing the development of more intelligent and responsive interaction systems.</p> </div> </section> </section> </section> <section class="ltx_section" id="S6"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">6 </span>Training Resources and Evaluation</h2> <section class="ltx_subsection" id="S6.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">6.1 </span>Training resources</h3> <div class="ltx_para" id="S6.SS1.p1"> <p class="ltx_p" id="S6.SS1.p1.1">Training a spoken dialogue system is a complex, multi-stage process, with each stage relying on specific datasets to achieve distinct training objectives and enhance system performance. This section provides an in-depth analysis of the training resources used by spoken dialogue models, showcasing the data collection and processing methods at each stage and illustrating how these elements contribute to the system’s intelligence. It further reveals how key steps, from foundational architecture to fine-tuning, shape the intelligent development of dialogue systems.</p> </div> <div class="ltx_para" id="S6.SS1.p2"> <p class="ltx_p" id="S6.SS1.p2.1">To address the limitations of existing spoken dialogue training data and to leverage the knowledge and reasoning capabilities of mature text-based models, many approaches involve <span class="ltx_text ltx_font_italic" id="S6.SS1.p2.1.1">Continue Training</span> on pre-trained text language models. This training paradigm encompasses nearly all data types required to build a spoken dialogue system. 
The following sections focus on analyzing data acquisition and processing methods under this training flow, covering the following core stages: <span class="ltx_text ltx_font_italic" id="S6.SS1.p2.1.2">Text Language Model Pre-training</span>, <span class="ltx_text ltx_font_italic" id="S6.SS1.p2.1.3">Post-Train for Audio Modal Adaptation</span>, <span class="ltx_text ltx_font_italic" id="S6.SS1.p2.1.4">Post-Train for Dual-Stream Audio Processing</span>, <span class="ltx_text ltx_font_italic" id="S6.SS1.p2.1.5">Enhancing Conversational Abilities and Instruction Tuning</span>. We have listed commonly used datasets for training in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.T2" title="Table 2 ‣ 6.1.2 Training resources about Post-Train for Audio Modal Alignment ‣ 6.1 Training resources ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2</span></a>. However, current spoken dialogue models lack exploration in music and sound. To support future development of spoken dialogue systems, we provide a list of common music and sound datasets in Appendix <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A1" title="Appendix A Resources about Music and Sound Datasets ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">A</span></a> as a reference.</p> </div> <section class="ltx_subsubsection" id="S6.SS1.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.1 </span>Training resources about Text LLM Pre-training</h4> <div class="ltx_para" id="S6.SS1.SSS1.p1"> <p class="ltx_p" id="S6.SS1.SSS1.p1.1">Text Language Model pre-training serves as the foundational stage for spoken dialogue models. Through unsupervised learning on large-scale text data, the model acquires knowledge of vocabulary, grammar, and contextual relationships, gaining essential knowledge and reasoning capabilities. 
Most spoken dialogue systems are built upon pre-existing open-source text language models (such as Llama <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib200" title="">200</a>]</cite>, Palm <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib6" title="">6</a>]</cite>, etc.). Although we do not delve into this stage in detail, it provides a solid foundation for the model’s natural language understanding and generation capabilities.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS1.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.2 </span>Training resources about Post-Train for Audio Modal Alignment</h4> <div class="ltx_para" id="S6.SS1.SSS2.p1"> <p class="ltx_p" id="S6.SS1.SSS2.p1.1">After establishing a text-based foundational model, the system possesses essential knowledge and reasoning abilities. In this stage, we introduce the audio modality, enabling the text language model to understand and generate speech while minimizing any potential loss of textual knowledge. This process is known as <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS2.p1.1.1">modal adaptation</span> or <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS2.p1.1.2">modal alignment</span>. This multimodal structure incorporates an audio encoder with a codebook, helping the model recognize linguistic, emotional, and tonal information in speech. 
The audio decoder supports the generation of natural and fluent speech output, while audio signal embeddings and special token types (e.g., speaker-distinguishing tokens for Synchronous LLM, task-distinguishing tokens for OmniFlatten, and state tokens for VITA) are added to the vocabulary of the text language model.</p> </div> <div class="ltx_para" id="S6.SS1.SSS2.p2"> <p class="ltx_p" id="S6.SS1.SSS2.p2.1">The primary goal at this stage is to align information from different modalities into a unified space or representation, allowing the model to correlate and comprehend such information. Consequently, the model is often trained on cross-modal tasks such as TTS, ASR, and audio captioning. The datasets used include numerous paired audio and text samples to ensure effective conversion between modalities. Commonly used TTS and ASR datasets include Aishell-3 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib190" title="">190</a>]</cite>, LibriTTS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib240" title="">240</a>]</cite>, TED-LIUM <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib178" title="">178</a>]</cite>, VoxPopuli <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib207" title="">207</a>]</cite>, Librispeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib160" title="">160</a>]</cite>, MLS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib168" title="">168</a>]</cite>, Wenetspeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib241" title="">241</a>]</cite>, Gigaspeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2411.13577v1#bib.bib24" title="">24</a>]</cite>, VCTK <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib202" title="">202</a>]</cite>, LJSpeech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib88" title="">88</a>]</cite>, Common Voice <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib8" title="">8</a>]</cite>, and others. For audio captioning, Wavcaps <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib147" title="">147</a>]</cite> is frequently used. Some speech datasets require ASR model transcription to generate corresponding text.</p> </div> <div class="ltx_para" id="S6.SS1.SSS2.p3"> <p class="ltx_p" id="S6.SS1.SSS2.p3.1">In this phase, the emphasis is placed on capturing and generating audio features and aligning them with text in vector space, rather than focusing on dialogue functionality. Therefore, the data typically consists of single-channel audio, which can be used after resampling. Notably, in some works, it is essential to ensure word-level alignment between text tokens and audio tokens (e.g., Spirit-LM, Moshi, and OmniFlatten), achievable through tools like the Whisper-timestamped package or other alignment tools. 
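As a sketch of what word-level alignment produces, the snippet below maps word timestamps (such as those emitted by whisper-timestamped) onto a fixed audio-token frame grid; the 12.5 Hz frame rate and the `<pad>` label are illustrative assumptions, not a fixed standard across models.

```python
def align_words_to_frames(words, n_frames, frame_rate_hz=12.5):
    """Map word-level (word, start_sec, end_sec) timestamps onto audio-token
    frames, so each audio frame knows which word (if any) it belongs to.
    Frames outside any word are labeled '<pad>'."""
    frames = ["<pad>"] * n_frames
    for word, start, end in words:
        first = int(start * frame_rate_hz)
        last = min(n_frames, int(end * frame_rate_hz) + 1)
        for t in range(first, last):
            frames[t] = word
    return frames

# Two words with hypothetical timestamps over 16 frames (1.28 s at 12.5 Hz):
words = [("hello", 0.0, 0.4), ("world", 0.5, 0.9)]
frames = align_words_to_frames(words, n_frames=16)
```

This frame-level map is what lets a model interleave text tokens with the audio tokens they correspond to, rather than aligning at the utterance level only.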
In Moshi, to prevent catastrophic forgetting, half of the training time is allocated to text data, highlighting the importance of balancing text and audio data during training.</p> </div> <figure class="ltx_table" id="S6.T2"> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S6.T2.2.1.1" style="font-size:90%;">Table 2</span>: </span><span class="ltx_text" id="S6.T2.3.2" style="font-size:90%;">Datasets used in the various training stages</span></figcaption> <div class="ltx_inline-block ltx_align_center ltx_transformed_outer" id="S6.T2.4" style="width:433.6pt;height:247.5pt;vertical-align:-0.0pt;"><span class="ltx_transformed_inner" style="transform:translate(-240.4pt,137.2pt) scale(0.474188659764415,0.474188659764415) ;"> <table class="ltx_tabular ltx_align_middle" id="S6.T2.4.1"> <tr class="ltx_tr" id="S6.T2.4.1.1"> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.1.1"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.1.1">Stage</span></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.1.2"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.2.1">Task</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.1.3"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.3.1">Dataset</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.1.4"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.4.1">Size</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.1.5"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.5.1">URL</span></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.1.6"><span class="ltx_text ltx_font_bold" id="S6.T2.4.1.1.6.1">Modality</span></td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.2"> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.2.1" rowspan="13"><span class="ltx_text" id="S6.T2.4.1.2.1.1">Modal Alignment</span></td> <td class="ltx_td ltx_align_left ltx_border_t" 
id="S6.T2.4.1.2.2">Mandarin ASR</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.2.3">AISHELL-1<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib18" title="">18</a>]</cite> </td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.2.4">170 hrs</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.2.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.openslr.org/33/" title="">https://www.openslr.org/33/</a></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.2.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.3"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.3.1">Mandarin ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.3.2">AISHELL-2<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib48" title="">48</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.3.3">1k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.3.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell2" title="">https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell2</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.3.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.4"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.4.1">Mandarin TTS</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.4.2">AISHELL-3<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib190" title="">190</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.4.3">85 hrs, 88,035 utt., 218 spk.</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.4.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.aishelltech.com/aishell_3" title="">https://www.aishelltech.com/aishell_3</a></td> <td class="ltx_td ltx_align_left" 
id="S6.T2.4.1.4.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.5"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.5.1">TTS</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.5.2">LibriTTS<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib240" title="">240</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.5.3">585 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.5.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.openslr.org/60/" title="">https://www.openslr.org/60/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.5.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.6"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.6.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.6.2">TED-LIUM<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib178" title="">178</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.6.3">452 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.6.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://lium.univ-lemans.fr/ted-lium3/" title="">https://lium.univ-lemans.fr/ted-lium3/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.6.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.7"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.7.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.7.2">VoxPopuli<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib207" title="">207</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.7.3">1.8k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.7.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/voxpopuli" title="">https://github.com/facebookresearch/voxpopuli</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.7.5">Text, 
Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.8"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.8.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.8.2">Librispeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib160" title="">160</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.8.3">1,000 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.8.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.openslr.org/12" title="">https://www.openslr.org/12</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.8.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.9"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.9.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.9.2">MLS<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib168" title="">168</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.9.3">44.5k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.9.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.openslr.org/" title="">https://www.openslr.org/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.9.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.10"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.10.1">TTS</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.10.2">Wenetspeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib241" title="">241</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.10.3">22.4k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.10.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://wenet.org.cn/WenetSpeech/" title="">https://wenet.org.cn/WenetSpeech/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.10.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.11"> <td 
class="ltx_td ltx_align_left" id="S6.T2.4.1.11.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.11.2">Gigaspeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib24" title="">24</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.11.3">40k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.11.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/SpeechColab/GigaSpeech" title="">https://github.com/SpeechColab/GigaSpeech</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.11.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.12"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.12.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.12.2">VCTK<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib202" title="">202</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.12.3">300 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.12.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://paperswithcode.com/dataset/voice-bank-demand" title="">https://paperswithcode.com/dataset/voice-bank-demand</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.12.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.13"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.13.1">TTS</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.13.2">LJSpeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib88" title="">88</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.13.3">24 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.13.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://keithito.com/LJ-Speech-Dataset/" title="">https://keithito.com/LJ-Speech-Dataset/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.13.5">Text, Speech</td> </tr> <tr 
class="ltx_tr" id="S6.T2.4.1.14"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.14.1">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.14.2">Common Voice<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib8" title="">8</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.14.3">2,500 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.14.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://commonvoice.mozilla.org/zh-CN" title="">https://commonvoice.mozilla.org/zh-CN</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.14.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.15"> <td class="ltx_td" id="S6.T2.4.1.15.1"></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.15.2">Audio Caption</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.15.3">Wavcaps<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib147" title="">147</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.15.4">400k clips</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.15.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/XinhaoMei/WavCaps" title="">https://github.com/XinhaoMei/WavCaps</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.15.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.16"> <td class="ltx_td" id="S6.T2.4.1.16.1"></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.16.2">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.16.3">LibriLight<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib101" title="">101</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.16.4">60k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.16.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/libri-light" 
title="">https://github.com/facebookresearch/libri-light</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.16.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.17"> <td class="ltx_td" id="S6.T2.4.1.17.1"></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.17.2">ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.17.3">PeopleSpeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib63" title="">63</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.17.4">30k hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.17.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/datasets/MLCommons/peoples_speech" title="">https://huggingface.co/datasets/MLCommons/peoples_speech</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.17.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.18"> <td class="ltx_td" id="S6.T2.4.1.18.1"></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.18.2">Mandarin ASR</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.18.3">KeSpeech<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib199" title="">199</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.18.4">1,542 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.18.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/KeSpeech/KeSpeech" title="">https://github.com/KeSpeech/KeSpeech</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.18.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.19"> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.19.1" rowspan="7"><span class="ltx_text" id="S6.T2.4.1.19.1.1">Dual-Stream Processing</span></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.19.2">Instruction</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.19.3">Alpaca<cite class="ltx_cite 
ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib144" title="">144</a>]</cite> </td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.19.4">52,000 items</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.19.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/datasets/tatsu-lab/alpaca" title="">https://huggingface.co/datasets/tatsu-lab/alpaca</a></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.19.6">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.20"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.20.1">Instruction</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.20.2">Moss</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.20.3">-</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.20.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/fnlp/moss-moon-003-sft" title="">https://huggingface.co/fnlp/moss-moon-003-sft</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.20.5">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.21"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.21.1">Instruction</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.21.2">BelleCN</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.21.3">-</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.21.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/LianjiaTech/BELLE/tree/main" title="">https://github.com/LianjiaTech/BELLE/tree/main</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.21.5">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.22"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.22.1">Dialogue</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.22.2">UltraChat<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib46" title="">46</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.22.3">1.5 
million</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.22.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/thunlp/UltraChat" title="">https://github.com/thunlp/UltraChat</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.22.5">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.23"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.23.1">Instruction</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.23.2">Open-Orca<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib124" title="">124</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.23.3">-</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.23.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/datasets/Open-Orca/OpenOrca" title="">https://huggingface.co/datasets/Open-Orca/OpenOrca</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.23.5">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.24"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.24.1">Noise</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.24.2">DNS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib174" title="">174</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.24.3">2425 hrs</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.24.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/microsoft/DNS-Challenge" title="">https://github.com/microsoft/DNS-Challenge</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.24.5">Noise data</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.25"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.25.1">Noise</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.25.2">MUSAN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib194" title="">194</a>]</cite> </td> <td class="ltx_td ltx_align_center" 
id="S6.T2.4.1.25.3">-</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.25.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.openslr.org/17/" title="">https://www.openslr.org/17/</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.25.5">Noise data</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.26"> <td class="ltx_td ltx_align_left ltx_border_b ltx_border_t" id="S6.T2.4.1.26.1" rowspan="4"><span class="ltx_text" id="S6.T2.4.1.26.1.1">Conversation Fine-Tune</span></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.26.2">Dialogue</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.26.3">Fisher</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.26.4">964 hrs</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T2.4.1.26.5"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://catalog.ldc.upenn.edu/LDC2004T19" title="">https://catalog.ldc.upenn.edu/LDC2004T19</a></td> <td class="ltx_td ltx_align_left ltx_border_t" id="S6.T2.4.1.26.6">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.27"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.27.1">Dialogue</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.27.2">GPT-Talker<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib137" title="">137</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.27.3">-</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.27.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/AI-S2-Lab/GPT-Talker" title="">https://github.com/AI-S2-Lab/GPT-Talker</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.27.5">Text, Speech</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.28"> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.28.1">Instruction</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.28.2">INSTRUCTS2S-200K</td> <td class="ltx_td ltx_align_center" id="S6.T2.4.1.28.3">200k items</td> <td class="ltx_td 
ltx_align_center" id="S6.T2.4.1.28.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/ictnlp/LLaMA-Omni" title="">https://github.com/ictnlp/LLaMA-Omni</a></td> <td class="ltx_td ltx_align_left" id="S6.T2.4.1.28.5">Text + TTS</td> </tr> <tr class="ltx_tr" id="S6.T2.4.1.29"> <td class="ltx_td ltx_align_left ltx_border_b" id="S6.T2.4.1.29.1">Instruction</td> <td class="ltx_td ltx_align_center ltx_border_b" id="S6.T2.4.1.29.2">Open Hermes</td> <td class="ltx_td ltx_align_center ltx_border_b" id="S6.T2.4.1.29.3">900k items</td> <td class="ltx_td ltx_align_center ltx_border_b" id="S6.T2.4.1.29.4"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://ollama.com/library/openhermes" title="">https://ollama.com/library/openhermes</a></td> <td class="ltx_td ltx_align_left ltx_border_b" id="S6.T2.4.1.29.5">Text + TTS</td> </tr> </table> </span></div> </figure> </section> <section class="ltx_subsubsection" id="S6.SS1.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.3 </span>Training Resources for Post-Training on Dual-Stream Dialogue Processing</h4> <div class="ltx_para" id="S6.SS1.SSS3.p1"> <p class="ltx_p" id="S6.SS1.SSS3.p1.1">To ensure that the model possesses the ability to “listen while speaking,” most research, such as Moshi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> and OmniFlatten <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite>, has implemented a dual audio-stream model: one audio stream carries the model’s output, while the other captures the user’s audio. The objective of this training phase is to enable the model’s dual-stream processing without requiring complex human-computer interaction modeling. Consequently, text dialogue data can be converted to speech and processed into dual-track audio format. 
However, text dialogue data typically contains content unsuitable for TTS conversion (such as code, formulas, and URLs), as well as long, formal passages that do not align with spoken language, since real dialogue is often more concise. Therefore, the text data must be preprocessed before synthesis. High-quality, open-source text dialogue data is first collected, including datasets like Alpaca <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib144" title="">144</a>]</cite>, Moss, BelleCN, UltraChat <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib46" title="">46</a>]</cite>, and Open-Orca <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib124" title="">124</a>]</cite>. To ensure suitability for speech synthesis (TTS), heuristic rules are applied to filter out samples with high proportions of non-text elements (such as code and mathematical expressions), samples exceeding 200 words, and samples containing rare symbols.</p> </div> <div class="ltx_para" id="S6.SS1.SSS3.p2"> <p class="ltx_p" id="S6.SS1.SSS3.p2.1">After filtering the text, TTS models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib49" title="">49</a>]</cite> are used to synthesize speech for each turn in the dialogues. For consistent voice effects, the model audio stream maintains a uniform voice, while the user audio stream is sampled with varied voices to enhance the model’s robustness. The synthesized dialogue audio is arranged using simulation strategies to achieve natural timing, such as turn-taking, well-timed interruptions, and pauses to maintain fluency and naturalness. 
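A minimal sketch of the heuristic TTS-suitability filter described above, assuming Python; only the 200-word cap comes from the text, while the specific regex markers for code/formulas/URLs and the 0.8 text-character ratio are illustrative assumptions:

```python
import re

def tts_suitable(sample: str, max_words: int = 200) -> bool:
    """Heuristically decide whether a text sample is suitable for TTS.

    Rejects samples that are too long, contain code/formula/URL markers,
    or have a high proportion of non-text characters (rare symbols).
    """
    if len(sample.split()) > max_words:                  # long, formal passages
        return False
    if re.search(r"```|\\\(|\\\[|https?://", sample):    # code, formulas, URLs
        return False
    text_chars = sum(ch.isalpha() or ch.isspace() for ch in sample)
    if text_chars / max(len(sample), 1) < 0.8:           # rare-symbol heavy
        return False
    return True
```

In practice such rules would be tuned per dataset; the point is that each criterion in the survey (length, non-text elements, rare symbols) maps to one cheap check.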
The final dialogue audio is organized in dual-channel format: the conversation begins with a user utterance, followed by alternating user and assistant turns. After each user turn, the assistant responds immediately; upon completion of the assistant’s turn, a sampled pause length is introduced to simulate the natural rhythm of alternating dialogue. To better simulate real scenarios, further data augmentation can be applied. For example, random gain adjustments can be applied to the user audio stream, and background noise randomly selected from datasets like MUSAN <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib194" title="">194</a>]</cite> and DNS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib174" title="">174</a>]</cite> can be added to the user audio channel (as in OmniFlatten). To simulate echo effects from a user’s microphone, portions of the audio stream can be scaled down and added to the user’s audio stream with random delays between 100 and 500 milliseconds, along with reverberation-like enhancements, helping the model adapt to real-world environments.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS1.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.1.4 </span>Training Resources for Enhancing Conversational Abilities and Instruction Tuning</h4> <div class="ltx_para" id="S6.SS1.SSS4.p1"> <p class="ltx_p" id="S6.SS1.SSS4.p1.1">Although the foundational model has been established, a gap remains between it and a complete dialogue system. The above model utilizes non-overlapping dialogue audio, where one party remains silent while the other speaks, failing to fully simulate real conversational dynamics. 
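The user-channel augmentation recipe described above (random gain, background-noise mixing, and a scaled-down echo delayed by 100–500 ms) could be sketched as follows; the gain range, noise level, echo scale, and sample rate are illustrative assumptions, not values from the cited works:

```python
import numpy as np

SR = 16_000  # assumed sample rate (Hz)

def augment_user_channel(user: np.ndarray, noise: np.ndarray,
                         rng: np.random.Generator) -> np.ndarray:
    """Sketch of user-channel augmentation for dual-stream training data."""
    out = user * rng.uniform(0.5, 1.5)               # random gain adjustment
    out = out + 0.1 * np.resize(noise, user.shape)   # mix in background noise
    delay = int(rng.uniform(0.1, 0.5) * SR)          # 100-500 ms delay in samples
    echo = np.zeros_like(out)
    echo[delay:] = 0.3 * out[:-delay]                # scaled-down delayed copy
    return out + echo
```

A fuller pipeline would additionally convolve with a room impulse response for the reverberation-like enhancement the text mentions.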
Some speech datasets, such as <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS4.p1.1.1">Generative Expressive Conversational Speech Synthesis</span> <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib137" title="">137</a>]</cite> and <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS4.p1.1.2">Fisher</span>, contain dialogues from real-world settings, providing a basis for modeling interruption and backchannel scenarios in voice dialogue systems.</p> </div> <div class="ltx_para" id="S6.SS1.SSS4.p2"> <p class="ltx_p" id="S6.SS1.SSS4.p2.1">Currently, there is no suitable dataset of real-world speech instructions. Most approaches therefore synthesize from text instruction data to perform <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS4.p2.1.1">instruction tuning</span> at this stage. Common text instruction datasets include <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS4.p2.1.2">Open Hermes</span> and <span class="ltx_text ltx_font_italic" id="S6.SS1.SSS4.p2.1.3">moss-002-sft-data</span>, though these face the same challenges as text dialogue data, such as unsuitability for TTS conversion and inconsistency with spoken language conventions. Following the synthesis pipelines of Moshi and Llama-Omni, the goal of this stage is to generate instruction data in the format (SpeechInstruction, TextInstruction, TextResponse, SpeechResponse).</p> </div> <div class="ltx_para" id="S6.SS1.SSS4.p3"> <p class="ltx_p" id="S6.SS1.SSS4.p3.1">The first method is synthetic generation from scratch. High-quality text data is first sourced from Wikipedia and StackExchange to produce thematic paragraphs, referred to as “context,” that serve as the dialogue foundation. Based on these contexts, dialogue summaries are generated. 
Next, a prompt template that includes the context guides the generation of complete dialogues around the theme, with the roles assigned as user and system. The model is prompted to exhibit knowledge on the topic and to include interruptions (backchannels) and brief turn-taking, simulating the natural flow of conversation. To enhance dialogue diversity, additional instructions involving speech emotion and role-playing can be generated, requesting dialogues in specific tones or styles. Furthermore, dialogues containing spelling errors or misinformation are synthesized to train the system to handle scenarios where user clarification or repetition is required. Single-turn interactions on basic mathematics, grammar, and factual questions are also generated to ensure the system can handle simple factual tasks. Finally, scenarios involving unethical or NSFW requests are created to train the system to decline to answer under such conditions.</p> </div> <div class="ltx_para" id="S6.SS1.SSS4.p4"> <p class="ltx_p" id="S6.SS1.SSS4.p4.1">The second method involves filtering and refining existing text instruction datasets. First, open-source text language models paraphrase the instructions to match spoken language traits, adding fillers like “uh” and “um” to mimic natural speech, while converting numbers and symbols into spoken forms so that the instructions remain concise and conversational. Generated text responses are likewise optimized for TTS output, removing lengthy expressions and complex grammatical structures so the content is clear and concise. 
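The survey describes this spoken-style rewriting as being done by open-source language models; a much-simplified rule-based stand-in for just the symbol/digit normalization and filler-insertion steps (the filler probability and word maps are hypothetical) might look like:

```python
import random
import re

# Hypothetical digit-to-word map for the normalization step
_DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
           "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def spokenize(text: str, rng: random.Random) -> str:
    """Rewrite text toward spoken-language conventions for TTS.

    Expands symbols and digits into words and occasionally prepends a
    filler, mimicking the LLM-based paraphrasing described in the text.
    """
    text = text.replace("%", " percent").replace("&", " and ")
    text = re.sub(r"\d", lambda m: " " + _DIGITS[m.group()] + " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    if rng.random() < 0.3:                      # occasional "uh"/"um" filler
        text = rng.choice(["Um, ", "Uh, "]) + text
    return text
```

A real pipeline would hand this job to an instruction-tuned LLM with a paraphrasing prompt; the sketch only shows why the normalization matters: symbols like "%" and raw digits read poorly when fed directly to a TTS system.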
After adjusting the instruction and response text, a TTS system converts the text to audio.</p> </div> </section> </section> <section class="ltx_subsection" id="S6.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">6.2 </span>Evaluation</h3> <div class="ltx_para" id="S6.SS2.p1"> <p class="ltx_p" id="S6.SS2.p1.1">Fair and comprehensive evaluation of spoken dialogue models presents a multifaceted challenge. On the one hand, the field of spoken dialogue still lacks publicly available test sets, comprehensive evaluation metrics, and established benchmarks. On the other hand, assessing the performance of spoken dialogue systems requires consideration from multiple perspectives. Basic aspects include the quality of generated speech, robustness, dialogue naturalness and accuracy, as well as response speed and generation time. Beyond these, more advanced evaluations are needed to assess multi-turn dialogue capabilities (such as long-form speech editing), interaction abilities, and the system’s proficiency in audio and music understanding and generation. Given these requirements, and in line with the comprehensive expectations for spoken dialogue systems outlined in Section <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1" title="2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2.1</span></a>, we will evaluate these systems from two angles: common evaluations and advanced evaluations. Specifically, we will assess eleven key factors: speech generation quality, text intelligence, speech intelligence, audio and music generation, audio and music understanding, multilingual capability, context learning, interaction capability, streaming latency, multimodal capability, and the safety of dialogue systems. 
Finally, we will list the current benchmarks and summarize the common conclusions derived from them.</p> </div> <figure class="ltx_table" id="S6.T3"> <figcaption class="ltx_caption"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="S6.T3.5.1.1" style="font-size:90%;">Table 3</span>: </span><span class="ltx_text" id="S6.T3.6.2" style="font-size:90%;">This table provides a comprehensive overview of the different components used to evaluate dialogue systems, including various abilities, common tasks, representative benchmarks, and corresponding metrics. The abilities include Text Intelligence, Speech Quality, Audio Understanding and Generation, Music Understanding and Generation, Multilingual Capability, Context Learning, Interaction Capability, Multimodal Capability, Security, and Speech Intelligence. The table aligns these tasks with widely used benchmarks such as VoiceBench, SUPERB, AudioBench, AirBench, SpokenWOZ, SD-EVAL, SuperCLUE, and MMAU, highlighting the dimensions they assess. To ensure comprehensive evaluation some metrics are defined: <span class="ltx_text ltx_font_bold" id="S6.T3.6.2.1">MT-Metrics</span>, which evaluate the quality of generated outputs using semantic and syntactic similarity; <span class="ltx_text ltx_font_bold" id="S6.T3.6.2.2">Acc-Metrics</span>, which measure recognition performance using precision, recall, and F-score; <span class="ltx_text ltx_font_bold" id="S6.T3.6.2.3">Subjective Metrics</span>, which assess creative and generative tasks like speech quality and audio generation. 
This structured framework provides a holistic view of benchmarks, tasks, and evaluation criteria for assessing diverse model capabilities.</span></figcaption> <div class="ltx_inline-block ltx_transformed_outer" id="S6.T3.7" style="width:433.6pt;height:199.3pt;vertical-align:-0.0pt;"><span class="ltx_transformed_inner" style="transform:translate(-272.8pt,125.4pt) scale(0.442794908864811,0.442794908864811) ;"> <table class="ltx_tabular ltx_align_middle" id="S6.T3.7.1"> <tr class="ltx_tr" id="S6.T3.7.1.1"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.1.1" rowspan="2" style="padding-top:2pt;padding-bottom:2pt;"> <span class="ltx_rule" style="width:100%;height:1.5pt;background:black;display:inline-block;"> </span> <span class="ltx_text ltx_font_bold" id="S6.T3.7.1.1.1.1">Level</span> </td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.1.2" rowspan="2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="S6.T3.7.1.1.2.1">Ability</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.1.3" rowspan="2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="S6.T3.7.1.1.3.1">Task</span></td> <td class="ltx_td ltx_align_center" colspan="8" id="S6.T3.7.1.1.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="S6.T3.7.1.1.4.1">Benchmark</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.1.5" rowspan="2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="S6.T3.7.1.1.5.1">Metric</span></td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.2"> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.1" style="padding-top:2pt;padding-bottom:2pt;">VoiceBench</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.2" style="padding-top:2pt;padding-bottom:2pt;">SUPERB</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.3" style="padding-top:2pt;padding-bottom:2pt;">AudioBench</td> <td class="ltx_td 
ltx_align_center ltx_border_r" id="S6.T3.7.1.2.4" style="padding-top:2pt;padding-bottom:2pt;">AirBench</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.5" style="padding-top:2pt;padding-bottom:2pt;">SpokenWOZ</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.6" style="padding-top:2pt;padding-bottom:2pt;">SD-EVAL</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.7" style="padding-top:2pt;padding-bottom:2pt;">SuperCLUE</td> <td class="ltx_td ltx_align_center ltx_border_r" id="S6.T3.7.1.2.8" style="padding-top:2pt;padding-bottom:2pt;">MMAU</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.3"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.1" rowspan="5" style="padding-top:2pt;padding-bottom:2pt;"> <span class="ltx_rule" style="width:100%;height:1.5pt;background:black;display:inline-block;"> </span> <span class="ltx_text" id="S6.T3.7.1.3.1.1">Basic</span> </td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.2" rowspan="3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.2.1">Text Intelligence</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.3" style="padding-top:2pt;padding-bottom:2pt;">Reasoning</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" 
id="S6.T3.7.1.3.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.8.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.10.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.11" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.3.11.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.3.12" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.4"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.1" style="padding-top:2pt;padding-bottom:2pt;">Instruction Following</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.2.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.6.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" 
id="S6.T3.7.1.4.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.8.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.4.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.4.10" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.5"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.1" style="padding-top:2pt;padding-bottom:2pt;">Conversational QA</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.2.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.6.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.8.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" 
id="S6.T3.7.1.5.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.5.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.5.10" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.6"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.1" style="padding-top:2pt;padding-bottom:2pt;">Speech Quality</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.2" style="padding-top:2pt;padding-bottom:2pt;">MOS, WER Evaluation</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.10" 
style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.6.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.6.11" style="padding-top:2pt;padding-bottom:2pt;">MOS, WER</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.7"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.1" style="padding-top:2pt;padding-bottom:2pt;">Streaming Latency</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.2" style="padding-top:2pt;padding-bottom:2pt;">Real-Time Dialogue</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.10" 
style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.7.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.7.11" style="padding-top:2pt;padding-bottom:2pt;">Real-Time Factor</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.8"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.1" rowspan="16" style="padding-top:2pt;padding-bottom:2pt;"> <span class="ltx_rule" style="width:100%;height:1.2pt;background:black;display:inline-block;"> </span> <span class="ltx_text" id="S6.T3.7.1.8.1.1">Advanced</span> </td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.2" rowspan="5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.2.1">Audio U&G</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.3" style="padding-top:2pt;padding-bottom:2pt;">Audio Classification</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.6.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.7.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td 
ltx_align_center" id="S6.T3.7.1.8.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.11" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.8.11.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.8.12" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.9"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.1" style="padding-top:2pt;padding-bottom:2pt;">Sound Event Detection</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.3.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.7.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.9.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.9" style="padding-top:2pt;padding-bottom:2pt;"><span 
class="ltx_text" id="S6.T3.7.1.9.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.9.10" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.10"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.1" style="padding-top:2pt;padding-bottom:2pt;">Audio Captioning</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.7.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.10.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.10.10" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.11"> <td class="ltx_td 
ltx_align_center" id="S6.T3.7.1.11.1" style="padding-top:2pt;padding-bottom:2pt;">Audio-Motivated Creative Writing</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.11.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.11.10" style="padding-top:2pt;padding-bottom:2pt;">Subjective Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.12"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.1" style="padding-top:2pt;padding-bottom:2pt;">Audio Generation</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" 
id="S6.T3.7.1.12.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.12.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.12.10" style="padding-top:2pt;padding-bottom:2pt;">MOS, FD, IS, KL, FAD, CLAP Score</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.13"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.1" rowspan="3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.1.1">Music U&G</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.2" style="padding-top:2pt;padding-bottom:2pt;">Music Captioning</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" 
id="S6.T3.7.1.13.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.6.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.13.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.13.11" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.14"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.1" style="padding-top:2pt;padding-bottom:2pt;">Music Classification</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.3" 
style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.14.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.14.10" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.15"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.1" style="padding-top:2pt;padding-bottom:2pt;">Music Synthesis</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" 
id="S6.T3.7.1.15.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.15.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.15.10" style="padding-top:2pt;padding-bottom:2pt;">MOS</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.16"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.1" style="padding-top:2pt;padding-bottom:2pt;">Multilingual Capability</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.2" style="padding-top:2pt;padding-bottom:2pt;">Speech Translation</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.5.1" style="color:#FF0000;">✗</span></td> <td 
class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.16.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.16.11" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.17"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.1" style="padding-top:2pt;padding-bottom:2pt;">Context Learning</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.2" style="padding-top:2pt;padding-bottom:2pt;">Context-Aware QA</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.5.1" 
style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.17.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.17.11" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.18"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.1" style="padding-top:2pt;padding-bottom:2pt;">Interaction Capability</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.2" style="padding-top:2pt;padding-bottom:2pt;">Interaction Events</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" 
id="S6.T3.7.1.18.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.18.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.18.11" style="padding-top:2pt;padding-bottom:2pt;">Statistic-Method</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.19"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.1" style="padding-top:2pt;padding-bottom:2pt;">Multimodal Capability</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.2" style="padding-top:2pt;padding-bottom:2pt;">Multimodal QA</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.5" style="padding-top:2pt;padding-bottom:2pt;"><span 
class="ltx_text" id="S6.T3.7.1.19.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.19.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.19.11" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.20"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.1" style="padding-top:2pt;padding-bottom:2pt;">Security</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.2" style="padding-top:2pt;padding-bottom:2pt;">Attack Events</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.3.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.5" style="padding-top:2pt;padding-bottom:2pt;"><span 
class="ltx_text" id="S6.T3.7.1.20.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.20.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.20.11" style="padding-top:2pt;padding-bottom:2pt;">Attack Success Rate</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.21"> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.1" rowspan="4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.1.1">Speech Intelligence</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.2" style="padding-top:2pt;padding-bottom:2pt;">Speaker Info</td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.3.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.4.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center 
ltx_border_t" id="S6.T3.7.1.21.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.6.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.21.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="S6.T3.7.1.21.11" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.22"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.1" style="padding-top:2pt;padding-bottom:2pt;">Paralinguistic info Classification</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.2.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.3.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.4.1" style="color:#00FF00;">✓</span></td> 
<td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.7.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.22.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.22.10" style="padding-top:2pt;padding-bottom:2pt;">Acc-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.23"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.1" style="padding-top:2pt;padding-bottom:2pt;">Conditioned response</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.2.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.5.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.6" 
style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.7.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.23.9.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.23.10" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.24"> <td class="ltx_td" id="S6.T3.7.1.24.1" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.2" style="padding-top:2pt;padding-bottom:2pt;">Controllable Style Generation</td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.3" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.3.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.4" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.4.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.5" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.5.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.6" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.6.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.7" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.7.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td 
ltx_align_center" id="S6.T3.7.1.24.8" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.8.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.9" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.9.1" style="color:#00FF00;">✓</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.10" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text" id="S6.T3.7.1.24.10.1" style="color:#FF0000;">✗</span></td> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.24.11" style="padding-top:2pt;padding-bottom:2pt;">MT-Metrics</td> </tr> <tr class="ltx_tr" id="S6.T3.7.1.25"> <td class="ltx_td ltx_align_center" id="S6.T3.7.1.25.1" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_rule" style="width:100%;height:1.5pt;background:black;display:inline-block;"> </span></td> <td class="ltx_td" id="S6.T3.7.1.25.2" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.3" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.4" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.5" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.6" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.7" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.8" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.9" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.10" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.11" style="padding-top:2pt;padding-bottom:2pt;"></td> <td class="ltx_td" id="S6.T3.7.1.25.12" style="padding-top:2pt;padding-bottom:2pt;"></td> </tr> </table> </span></div> </figure> <section class="ltx_subsubsection" id="S6.SS2.SSS1"> <h4 class="ltx_title 
ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.1 </span>Common Evaluation</h4> <div class="ltx_para" id="S6.SS2.SSS1.p1"> <p class="ltx_p" id="S6.SS2.SSS1.p1.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS1.p1.1.1">Text Intelligence.</span> As shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> (a), text intelligence refers to the fundamental understanding and generation capabilities of the spoken dialogue model. When evaluating text intelligence, the focus is solely on the semantic content generated by the model, without considering other aspects such as timbre, emotion, or style. In practical evaluations of this kind, some spoken dialogue models output only text <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib191" title="">191</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib198" title="">198</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib33" title="">33</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite>, while others generate both text and speech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib222" title="">222</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib223" title="">223</a>]</cite>, or only speech <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib246" title="">246</a>]</cite>. 
Regardless of the output format, we are concerned only with the generated text or the text transcribed from the speech when evaluating the text intelligence of spoken dialogue models. There are typically two categories of metrics and benchmarks used to assess text intelligence: Acc-Metrics and MT-Metrics. The details are outlined as follows:</p> </div> <div class="ltx_para" id="S6.SS2.SSS1.p2"> <p class="ltx_p" id="S6.SS2.SSS1.p2.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS1.p2.1.m1.1"><semantics id="S6.SS2.SSS1.p2.1.m1.1a"><mo id="S6.SS2.SSS1.p2.1.m1.1.1" xref="S6.SS2.SSS1.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS1.p2.1.m1.1b"><ci id="S6.SS2.SSS1.p2.1.m1.1.1.cmml" xref="S6.SS2.SSS1.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS1.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS1.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS1.p2.1.1">ACC-Metrics.</em> A common approach to evaluating text intelligence is to use benchmarks typically <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib197" title="">197</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib125" title="">125</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib239" title="">239</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib38" title="">38</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib181" title="">181</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib26" title="">26</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib255" title="">255</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib153" title="">153</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib215"
title="">215</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib58" title="">58</a>]</cite> employed for large language models, such as the classic MMLU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib75" title="">75</a>]</cite> and GSM-8K <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib39" title="">39</a>]</cite>. These benchmarks often include complex multiple-choice questions, which assess the model’s reasoning abilities through Acc-Metrics. Acc-Metrics refers to metrics that measure recognition accuracy, such as accuracy, F-score, and Mean Average Precision (mAP). It is noteworthy that these benchmarks often evaluate the text-based intelligence of spoken dialogue models from various perspectives. For example, MMLU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib75" title="">75</a>]</cite> and GSM-8K <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib39" title="">39</a>]</cite> are more focused on LLM’s core knowledge, Flan <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib139" title="">139</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib217" title="">217</a>]</cite> and Self-instruct <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib214" title="">214</a>]</cite> are more focused on LLM’s instruction following capability, CoQA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib175" title="">175</a>]</cite> and OpenAssistant <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib112" title="">112</a>]</cite> are more focused on LLM’s 
conversational capability. These benchmarks often contain questions with corresponding answers. Most of the questions are close-ended with short answers, which gives the benchmarks good generality: any model that can generate textual answers can be evaluated on them, and accuracy or F-score can easily be adopted as the evaluation metric.</p> </div> <div class="ltx_para" id="S6.SS2.SSS1.p3"> <p class="ltx_p" id="S6.SS2.SSS1.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS1.p3.1.m1.1"><semantics id="S6.SS2.SSS1.p3.1.m1.1a"><mo id="S6.SS2.SSS1.p3.1.m1.1.1" xref="S6.SS2.SSS1.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS1.p3.1.m1.1b"><ci id="S6.SS2.SSS1.p3.1.m1.1.1.cmml" xref="S6.SS2.SSS1.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS1.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS1.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS1.p3.1.1">MT-Metrics.</em> With the development of LLMs, models can follow instructions to accomplish many complex tasks, so the scope of evaluation has been further expanded to include open-ended questions. These open-ended questions often lack standard answers, making them difficult to measure with common ACC-Metrics. A common approach is to measure the similarity between generated and reference utterances using metrics originally developed for machine translation (e.g.
BLEU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib161" title="">161</a>]</cite>, METEOR <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib13" title="">13</a>]</cite>, ROUGE <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib126" title="">126</a>]</cite>). We collectively refer to these evaluation metrics as <span class="ltx_text ltx_font_bold" id="S6.SS2.SSS1.p3.1.2">MT-Metrics</span>. However, these metrics have certain limitations, since the same meaning can be conveyed in many different ways. Metrics such as BertScore <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib247" title="">247</a>]</cite> therefore focus on evaluating the semantic similarity between two sentences. There are also methods that use an LLM to judge the effectiveness of responses according to human preference <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib252" title="">252</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib138" title="">138</a>]</cite>. Ratings from these LLM-based evaluators, especially GPT-4o-based ones, demonstrate a high degree of correlation with human judgments.</p> </div> <div class="ltx_para" id="S6.SS2.SSS1.p4"> <p class="ltx_p" id="S6.SS2.SSS1.p4.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS1.p4.1.1">Speech Quality.</span> Speech quality is one of the fundamental aspects of evaluating spoken dialogue systems, as it is closely tied to the user experience.
There are two common dimensions for assessing speech quality: the clarity and naturalness (expressiveness and prosody) of the generated audio, and the robustness of the generated speech, such as the presence of missing or extra words. The former is typically evaluated using subjective MOS (Mean Opinion Score) ratings, while the latter is commonly assessed using WER (Word Error Rate) or CER (Character Error Rate) metrics.</p> </div> <div class="ltx_para" id="S6.SS2.SSS1.p5"> <p class="ltx_p" id="S6.SS2.SSS1.p5.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS1.p5.1.1">Streaming Latency.</span> In addition to evaluating the quality of text understanding and generated speech, the speed at which a spoken dialogue system generates speech responses is also crucial. This necessitates the ability to stream both the comprehension and generation of speech in real time, so that the model can begin speaking while it is still generating the rest of its response <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib248" title="">248</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib57" title="">57</a>]</cite>. To assess the streaming performance of a model, one typically measures the time taken to generate the first token of speech (i.e., the waiting time after the user finishes speaking) and calculates the overall Real-Time Factor (RTF) of the spoken dialogue model’s response.
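For a single exchange, these two quantities reduce to simple arithmetic over timestamps. A minimal sketch follows; the timings are hypothetical, and the RTF convention here follows the surrounding text (generated-audio duration divided by generation time, so higher means faster than real time):

```python
def streaming_metrics(user_end_ts, first_chunk_ts, last_chunk_ts, audio_seconds):
    """Compute first-token latency and RTF for one spoken response.

    All timestamps are in seconds. audio_seconds is the duration of the
    speech the model produced. RTF here divides generated-audio duration
    by generation time, so RTF > 1 means faster than real time.
    """
    first_token_latency = first_chunk_ts - user_end_ts   # user-perceived wait
    generation_time = last_chunk_ts - user_end_ts        # total time to finish
    rtf = audio_seconds / generation_time
    return first_token_latency, rtf

# Hypothetical timings: user stops speaking at t=0.0 s, first audio chunk
# arrives at 0.4 s, response finishes at 2.0 s, yielding 4.0 s of speech.
latency, rtf = streaming_metrics(0.0, 0.4, 2.0, 4.0)
```

In practice the timestamps would be logged around the model's streaming inference loop rather than supplied by hand.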
The RTF value is obtained by dividing the total duration of the speech segment generated by the model by the time the model takes to generate that response.</p> </div> </section> <section class="ltx_subsubsection" id="S6.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">6.2.2 </span>Advanced Evaluation</h4> <div class="ltx_para" id="S6.SS2.SSS2.p1"> <p class="ltx_p" id="S6.SS2.SSS2.p1.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p1.1.1">Speech Intelligence.</span> Evaluating the speech intelligence of spoken dialogue systems is one of the key aspects of their assessment. The definition of speech intelligence in spoken dialogue systems is discussed in detail in Section <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.SS1.SSS2" title="2.1.2 Speech Intelligence ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">2.1.2</span></a>. Given that speech intelligence encompasses a wide range of application scenarios, we address the understanding and generation components separately.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p2"> <p class="ltx_p" id="S6.SS2.SSS2.p2.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS2.p2.1.m1.1"><semantics id="S6.SS2.SSS2.p2.1.m1.1a"><mo id="S6.SS2.SSS2.p2.1.m1.1.1" xref="S6.SS2.SSS2.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS2.p2.1.m1.1b"><ci id="S6.SS2.SSS2.p2.1.m1.1.1.cmml" xref="S6.SS2.SSS2.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS2.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS2.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS2.p2.1.1">Understanding.</em> Ordinary cascaded spoken dialogue models, which rely on ASR to obtain text input, lose much
paralinguistic information such as speaking style, accent, and emotion. Many spoken dialogue models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>]</cite> are therefore devoted to helping dialogue models understand this paralinguistic information. Evaluating this capability can start from two aspects: a) the accuracy with which paralinguistic information is understood, and b) the ability to <span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p2.1.2">automatically</span> generate appropriate and coherent content responses and acoustic information based on the varying acoustic input. <span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p2.1.3">For the former</span>, the classes of paralinguistic information are usually limited; for example, sentiment is generally categorized as neutral, negative, or positive. Researchers therefore typically use accuracy or F-score to evaluate a model’s paralinguistic understanding capability.
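A toy sketch of computing accuracy and per-class F-score over the three sentiment classes mentioned above (the labels here are invented purely for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_per_class(y_true, y_pred, label):
    """F1 for one class, from its true/false positives and false negatives."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented sentiment predictions over the three classes.
y_true = ["neutral", "negative", "positive", "positive", "neutral"]
y_pred = ["neutral", "negative", "negative", "positive", "neutral"]
acc = accuracy(y_true, y_pred)                       # 4 of 5 correct
f1_pos = f1_per_class(y_true, y_pred, "positive")
```

Macro-averaging the per-class F1 values then yields a single summary score across all paralinguistic classes.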
Recently, many studies <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib66" title="">66</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib19" title="">19</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib167" title="">167</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib59" title="">59</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib20" title="">20</a>]</cite> have become available for researchers to identify speech emotions in dialogue scenarios. In addition to recognizing speech emotions, recent benchmarks <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib7" title="">7</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib234" title="">234</a>]</cite> have also begun to investigate the influence of speaker age, accent, and other factors on the evaluation of spoken dialogue models. <span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p2.1.4">For the latter</span>, recent work <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite> has increasingly focused on generating appropriate content responses based on acoustic information from the input. Current evaluation methods usually transcribe the output audio into text through automatic speech recognition and then evaluate the relevance between the generated content and the reference content in an internal dataset.
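The transcribe-then-score pipeline described above can be sketched as follows; the ASR stage is mocked with a fixed transcript, and a simplified modified n-gram precision (one component of full BLEU) stands in for the real metric:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Modified n-gram precision between two whitespace-tokenized sentences:
    the fraction of candidate n-grams that also appear in the reference,
    with counts clipped to the reference."""
    cand, ref = candidate.split(), reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# Mocked ASR output: in practice this string would be produced by
# transcribing the model's generated audio.
hypothesis = "i am happy to hear that"
reference = "i am glad to hear that"
score = ngram_precision(hypothesis, reference, n=2)
```

Full BLEU additionally combines several n-gram orders with a brevity penalty; library implementations would be used in a real evaluation.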
Evaluations are usually conducted in text, so the commonly used metrics are the same as those in Section <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.SS2.SSS1" title="6.2.1 Common Evaluation ‣ 6.2 Evaluation ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">6.2.1</span></a>, such as BLEU and METEOR, which measure the similarity between two sentences. Currently, there is limited research exploring whether spoken dialogue models can autonomously generate appropriate acoustic responses based on varying acoustic information, making it a promising area for future investigation.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p3"> <p class="ltx_p" id="S6.SS2.SSS2.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS2.p3.1.m1.1"><semantics id="S6.SS2.SSS2.p3.1.m1.1a"><mo id="S6.SS2.SSS2.p3.1.m1.1.1" xref="S6.SS2.SSS2.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS2.p3.1.m1.1b"><ci id="S6.SS2.SSS2.p3.1.m1.1.1.cmml" xref="S6.SS2.SSS2.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS2.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS2.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS2.p3.1.1">Generation.</em> In the generation component, evaluating the speech intelligence of spoken dialogue systems primarily focuses on controllability, i.e., the ability of the dialogue model to respond in a user-specified style and timbre in zero-shot scenarios. There are various dimensions along which to assess style, such as pitch, speech rate, energy, emotion, and accent, among others. ACC-metrics can be used to evaluate whether the spoken dialogue model can generate speech in the desired style.
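Beyond class-level style checks, a common objective proxy for timbre consistency is the cosine similarity between embeddings of the generated and reference speech. A minimal sketch with placeholder vectors follows; in a real system the embeddings would be extracted by a speaker-verification or style encoder:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Placeholder speaker embeddings; real ones come from applying a
# speaker-verification encoder to the reference and generated audio.
ref_emb = [0.1, 0.3, 0.5]
gen_emb = [0.1, 0.3, 0.5]
sim = cosine_similarity(ref_emb, gen_emb)  # identical vectors give 1.0
```

Scores near 1.0 indicate that the generated speech preserves the reference voice; thresholds are typically calibrated per embedding model.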
Additionally, the evaluation of voice cloning capabilities within the model can borrow metrics from the zero-shot TTS domain <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib209" title="">209</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib189" title="">189</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib91" title="">91</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib210" title="">210</a>]</cite>, using speaker similarity indices <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib27" title="">27</a>]</cite>. Currently, few models explore speech intelligence on the generation side of spoken dialogue systems, and this area warrants further refinement and exploration in future work.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p4"> <p class="ltx_p" id="S6.SS2.SSS2.p4.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p4.1.1">Audio Understanding and Generation.</span> In real-world scenarios, the broader definition of the speech modality encompasses not only clear human speech but also a wide range of natural sounds such as dog barking and bird chirping, all of which can be considered forms of audio.
Evaluating the ability of spoken dialogue models to understand and generate such audio is a critical aspect of assessing the model’s performance.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p5"> <p class="ltx_p" id="S6.SS2.SSS2.p5.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS2.p5.1.m1.1"><semantics id="S6.SS2.SSS2.p5.1.m1.1a"><mo id="S6.SS2.SSS2.p5.1.m1.1.1" xref="S6.SS2.SSS2.p5.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS2.p5.1.m1.1b"><ci id="S6.SS2.SSS2.p5.1.m1.1.1.cmml" xref="S6.SS2.SSS2.p5.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS2.p5.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS2.p5.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS2.p5.1.1">Audio Understanding.</em> On the audio comprehension side, various sub-tasks are commonly employed to measure a system’s capacity to understand audio, including tasks such as Audio Captioning (AudioCap) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib105" title="">105</a>]</cite>, Sound Event Detection (SED) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib152" title="">152</a>]</cite>, audio classification, and audio-motivated creative writing <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib34" title="">34</a>]</cite>, among others. The core of these tasks lies in evaluating the model’s ability to process and interpret the complex acoustic information embedded within the audio. For tasks like audio classification and SED, which involve fixed outputs, evaluation is relatively straightforward, typically using objective metrics such as accuracy or Mean Average Precision (mAP). 
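For such fixed-output tasks, per-class average precision can be computed from ranked detector scores and then averaged across classes to obtain mAP. A simplified sketch with hypothetical scores for two sound-event classes:

```python
def average_precision(scores, labels):
    """AP for one class: mean of precision@k taken at each positive,
    with predictions ranked by descending score."""
    ranked = [lab for _, lab in sorted(zip(scores, labels), key=lambda t: -t[0])]
    hits, precisions = 0, []
    for k, lab in enumerate(ranked, start=1):
        if lab:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_class):
    """mAP: average the per-class AP values."""
    aps = [average_precision(s, l) for s, l in per_class]
    return sum(aps) / len(aps)

# Hypothetical detector scores and binary ground-truth labels
# for two sound-event classes.
dog = ([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])
bird = ([0.7, 0.6, 0.2], [1, 1, 0])
map_score = mean_average_precision([dog, bird])
```

Benchmark toolkits typically interpolate the precision-recall curve before averaging; the clip-level ranking logic is the same.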
However, for the AudioCap task, the problem is generally open-ended, meaning there are no fixed answers. As a result, existing evaluation methods are primarily based on measuring the similarity between the generated text and the reference text, using traditional metrics such as BLEU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib161" title="">161</a>]</cite> and METEOR <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib13" title="">13</a>]</cite>, or newer evaluation approaches involving large language models such as GPT-4o <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib252" title="">252</a>]</cite>. In the case of audio-motivated creative writing, where the objective is to generate inventive descriptions from a given audio input, evaluation typically relies on subjective measures, given the divergent nature of the creative process involved.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p6"> <p class="ltx_p" id="S6.SS2.SSS2.p6.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS2.SSS2.p6.1.m1.1"><semantics id="S6.SS2.SSS2.p6.1.m1.1a"><mo id="S6.SS2.SSS2.p6.1.m1.1.1" xref="S6.SS2.SSS2.p6.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS2.SSS2.p6.1.m1.1b"><ci id="S6.SS2.SSS2.p6.1.m1.1.1.cmml" xref="S6.SS2.SSS2.p6.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS2.SSS2.p6.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS2.SSS2.p6.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS2.SSS2.p6.1.1">Audio Generation.</em> Additionally, on the audio generation side, producing high-quality audio should be considered an advanced capability for a conversational spoken dialogue model. 
However, as most current spoken dialogue systems lack the ability to generate audio, this remains an area for further exploration in future end-to-end spoken dialogue systems. The evaluation of generated audio can draw from methods used in the text-to-audio domain <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib81" title="">81</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib83" title="">83</a>]</cite>. Typically, such evaluations focus on the quality of the generated audio itself, using metrics such as Mean Opinion Score (MOS) and the similarity between generated and target audio. Objective evaluation metrics for audio similarity often include Fréchet Distance (FD), Inception Score (IS), Kullback-Leibler (KL) divergence, Fréchet Audio Distance (FAD), and CLAP score. Specifically, Fréchet Audio Distance (FAD) <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib104" title="">104</a>]</cite> is adapted from the Fréchet Inception Distance (FID) to the audio domain and serves as a reference-free perceptual metric that quantifies the distance between the generated and ground-truth audio distributions. The Inception Score (IS) is an effective metric that evaluates both the quality and diversity of generated audio. KL divergence is computed at the paired-sample level between generated and ground-truth audio based on label distributions, then averaged to produce a final score. Fréchet Distance (FD) evaluates the similarity between the generated and ground-truth audio distributions. FD, KL, and IS are built upon the PANNs model <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib110" title="">110</a>]</cite>, which takes mel-spectrograms as input.
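These Fréchet-style metrics compare Gaussian statistics (mean and covariance) of embedding distributions. Under a simplifying diagonal-covariance assumption the distance has a closed form; real FD/FAD implementations use full covariance matrices and a matrix square root:

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum_i (sqrt(var1_i) - sqrt(var2_i))^2.
    mu* are per-dimension embedding means, var* per-dimension variances."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((math.sqrt(u) - math.sqrt(v)) ** 2 for u, v in zip(var1, var2))
    return mean_term + cov_term

# Identical statistics give distance 0; shifting one mean increases it.
d_same = frechet_distance_diag([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0])
d_shift = frechet_distance_diag([1.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0])
```

In practice the statistics are estimated from embeddings of many generated and reference clips, not from two samples.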
In contrast, FAD uses VGGish <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib76" title="">76</a>]</cite> as an audio classifier, processing raw audio waveforms as input. The CLAP score, adapted from the CLIP score <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib77" title="">77</a>]</cite>, is a reference-free metric used to assess audio-text alignment and strongly correlates with human perception.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p7"> <p class="ltx_p" id="S6.SS2.SSS2.p7.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p7.1.1">Music Understanding and Generation.</span> In advanced spoken dialogue models, the evaluation of music understanding and generation follows a methodology similar to that used for the audio modality. Unlike Audio Understanding, which only requires a general description of the events that occur in the audio, Music Understanding requires appreciating the style and genre of music and understanding its keys, themes, and other rich information. For classification and emotion-recognition tasks in music, common metrics such as accuracy can be used. For the music captioning task, MusicCaps <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib2" title="">2</a>]</cite> offers a general dataset for evaluating a model’s music understanding capability. For music analysis, Nsynth <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib56" title="">56</a>]</cite> provides rich note-level data.
In terms of evaluation for music generation, subjective Mean Opinion Score (MOS) assessments or measures of similarity between generated and target music are commonly used.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p8"> <p class="ltx_p" id="S6.SS2.SSS2.p8.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p8.1.1">Multilingual Capability.</span> The ability to speak multiple languages is also required of a spoken dialogue model, but most current models <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib68" title="">68</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib127" title="">127</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib128" title="">128</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib191" title="">191</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib208" title="">208</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib227" title="">227</a>]</cite> focus only on English and Chinese. A naive idea is to directly evaluate spoken dialogue models’ capability on speech-to-speech or speech-to-text translation tasks <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib94" title="">94</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib206" title="">206</a>]</cite>. These evaluations can be done with common machine translation metrics like BLEU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib161" title="">161</a>]</cite> or BertScore <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib247" title="">247</a>]</cite>.
However, evaluating translation capability alone is insufficient to measure a model’s multilingual conversational ability, and further exploration is still needed in this area of evaluation. Explicitly requiring a spoken dialogue model to perform speech translation is not a typical use case in conversational scenarios. In most cases, when a user asks a question in a different language or with a distinct accent, the model is expected to automatically respond in the same language that the user is using. In this context, evaluating the language-identification accuracy of the model’s generated speech, combined with subjective human assessment, seems a more intuitive and appropriate evaluation method.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p9"> <p class="ltx_p" id="S6.SS2.SSS2.p9.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p9.1.1">Context Learning.</span> The context learning capability is crucial for maintaining the coherence of an entire conversation. Similar to a memory function, the challenge lies in how to preserve this capability when relying solely on speech. Typically, the evaluation of a spoken dialogue model’s context learning ability depends on specific long-duration dialogue test sets, after which standard MT-Metrics or Acc-Metrics used in text intelligence evaluations can be applied. For instance, a model’s context learning capability can be assessed by evaluating its QA performance based on the given context <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib132" title="">132</a>]</cite>. However, it is important to note the relevance of editing scenarios in long-duration spoken dialogues.
In real spoken dialogue scenarios, users may modify certain key information, and the model needs to promptly understand the change and respond accordingly; for example, a user may provide incorrect information for solving a problem and then correct the condition in the next turn. How to evaluate a model’s online understanding ability therefore still requires further study.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p10"> <p class="ltx_p" id="S6.SS2.SSS2.p10.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p10.1.1">Interaction Capability.</span> Interactive ability is also an essential metric for assessing the advanced capabilities of spoken dialogue systems. As illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S2.F4" title="Figure 4 ‣ 2.1.4 Audio and Music Understanding ‣ 2.1 Functions of Spoken Dialogue Systems ‣ 2 Overall ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> (b), basic interactive ability refers to the system’s capacity to allow users to interrupt the conversation at any time. In this context, it is crucial to evaluate whether the spoken dialogue model can promptly comprehend the user’s new input and halt its current response. This is commonly measured using accuracy. Furthermore, it is important to assess whether the model can generate a coherent and appropriate response based on the new input, which ties back to the previous evaluation standards for text and speech intelligence.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p11"> <p class="ltx_p" id="S6.SS2.SSS2.p11.1">In addition, in real-world scenarios, beyond basic interruptions, various discourse markers such as "okay" and "haha" are often used to indicate interaction. Current spoken dialogue systems <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib157" title="">157</a>]</cite> typically track the frequency of these markers as a standard evaluation metric.
Looking ahead, it may be valuable to assess whether future spoken dialogue models can effectively and appropriately interrupt human speakers, which could also represent a key dimension for evaluating interaction capability.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p12"> <p class="ltx_p" id="S6.SS2.SSS2.p12.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p12.1.1">Multimodal Capability.</span> Spoken dialogue models primarily focus on the audio modality for both input and output. However, considering the close coupling between the video and audio modalities in practical applications of dialogue systems, recent advancements in spoken dialogue models have incorporated the understanding of video and images in the input stage <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib61" title="">61</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib122" title="">122</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib162" title="">162</a>]</cite>, indicating that future spoken dialogue models will need to understand visual and audio information simultaneously to achieve real-time audio-visual understanding. The evaluation of such models generally still focuses on dialogue quality, that is, whether the generated dialogue is similar to the reference dialogue. Therefore, this aspect can still be evaluated using metrics such as BLEU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib161" title="">161</a>]</cite> and METEOR <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib13" title="">13</a>]</cite> to assess sentence-level semantic similarity.
However, research in this area also concerns the understanding of visual information; how to evaluate a model’s correct understanding of real-time visual information in dialogue remains difficult and is a promising direction for future benchmarks.</p> </div> <div class="ltx_para" id="S6.SS2.SSS2.p13"> <p class="ltx_p" id="S6.SS2.SSS2.p13.1"><span class="ltx_text ltx_font_bold" id="S6.SS2.SSS2.p13.1.1">Security.</span> Security is also an integral part of evaluation: ensuring that model outputs comply with ethical and social norms is a critical aspect. Spoken dialogue models may encounter security issues such as harmful content generation, privacy pitfalls, bias, and adversarial attacks. There has been considerable research progress in evaluating the text modality <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib47" title="">47</a>]</cite>. A commonly used metric is the attack success rate of injection attacks and similar exploits. However, there are relatively few evaluation methods for the speech modality.
Constructing datasets for attacking spoken dialogue models, preventing the poisoning of speech data, and benchmarking models’ speech defense capabilities all require further research in the field of spoken dialogue model evaluation.</p> </div> </section> </section> <section class="ltx_subsection" id="S6.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">6.3 </span>Benchmark</h3> <div class="ltx_para" id="S6.SS3.p1"> <p class="ltx_p" id="S6.SS3.p1.1">We list the common benchmarks for evaluating spoken dialogue systems in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#S6.T3" title="Table 3 ‣ 6.2 Evaluation ‣ 6 Training Resources and Evaluation ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">3</span></a>, and briefly introduce each benchmark in this section.</p> </div> <div class="ltx_para" id="S6.SS3.p2"> <p class="ltx_p" id="S6.SS3.p2.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p2.1.m1.1"><semantics id="S6.SS3.p2.1.m1.1a"><mo id="S6.SS3.p2.1.m1.1.1" xref="S6.SS3.p2.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p2.1.m1.1b"><ci id="S6.SS3.p2.1.m1.1.1.cmml" xref="S6.SS3.p2.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p2.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p2.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p2.1.1">VoiceBench.</em> The key evaluation dimensions of VoiceBench <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib29" title="">29</a>]</cite> include general knowledge, instruction-following ability, and safety compliance. The benchmark incorporates both synthetic and real spoken instructions to simulate diverse speaker styles, environmental conditions, and content variations.
It challenges systems with tasks involving accent adaptability, handling noisy environments, and robustness against content irregularities such as grammatical errors, disfluencies, and mispronunciations. Additionally, it explores the systems’ resilience under varying speaker characteristics (age, pitch, and speaking speed) and environmental challenges like reverberation, background noise, and far-field effects.</p> </div> <div class="ltx_para" id="S6.SS3.p3"> <p class="ltx_p" id="S6.SS3.p3.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p3.1.m1.1"><semantics id="S6.SS3.p3.1.m1.1a"><mo id="S6.SS3.p3.1.m1.1.1" xref="S6.SS3.p3.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p3.1.m1.1b"><ci id="S6.SS3.p3.1.m1.1.1.cmml" xref="S6.SS3.p3.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p3.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p3.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p3.1.1">SUPERB.<cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib235" title="">235</a>]</cite></em> The benchmark evaluates speech processing models across multiple dimensions, including content recognition, speaker modeling, semantic understanding, and paralinguistic analysis. Tasks in content recognition cover phoneme recognition, automatic speech recognition, keyword spotting, and query-by-example spoken term detection, focusing on transcription and content detection accuracy. Speaker modeling involves tasks like speaker identification, automatic speaker verification, and speaker diarization to assess speaker-related features. Semantic understanding includes intent classification and slot filling, testing models’ ability to infer high-level meaning directly from raw audio. 
Paralinguistic analysis focuses on emotion recognition, capturing models’ ability to interpret affective cues from speech. The evaluation framework uses publicly available datasets and conventional metrics to provide a standardized testbed for assessing generalizability and task-specific performance.</p> </div> <div class="ltx_para" id="S6.SS3.p4"> <p class="ltx_p" id="S6.SS3.p4.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p4.1.m1.1"><semantics id="S6.SS3.p4.1.m1.1a"><mo id="S6.SS3.p4.1.m1.1.1" xref="S6.SS3.p4.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p4.1.m1.1b"><ci id="S6.SS3.p4.1.m1.1.1.cmml" xref="S6.SS3.p4.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p4.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p4.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p4.1.1">AudioBench.</em> AudioBench <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib205" title="">205</a>]</cite> evaluates spoken dialogue models across three primary dimensions: speech understanding, audio scene understanding, and voice (paralinguistic) understanding. It encompasses eight distinct tasks and leverages 26 datasets, including seven newly developed datasets. 
The evaluation emphasizes models’ ability to handle instruction-following tasks conditioned on audio signals, addressing aspects such as speech recognition accuracy, environmental sound interpretation, and paralinguistic feature extraction (e.g., emotion, gender, accent).</p> </div> <div class="ltx_para" id="S6.SS3.p5"> <p class="ltx_p" id="S6.SS3.p5.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p5.1.m1.1"><semantics id="S6.SS3.p5.1.m1.1a"><mo id="S6.SS3.p5.1.m1.1.1" xref="S6.SS3.p5.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p5.1.m1.1b"><ci id="S6.SS3.p5.1.m1.1.1.cmml" xref="S6.SS3.p5.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p5.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p5.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p5.1.1">AirBench.</em> AIR-Bench <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib234" title="">234</a>]</cite> assesses the capabilities of spoken dialogue models to understand and interact based on various audio types, including human speech, natural sounds, and music. It consists of two primary components: a foundation benchmark with 19 specific audio tasks and over 19,000 single-choice questions, and a chat benchmark featuring more than 2,000 open-ended audio-prompted questions. The foundation benchmark evaluates fundamental skills such as speech recognition, acoustic scene classification, and music genre identification, focusing on specific subtasks to diagnose model weaknesses. The chat benchmark tests the models’ ability to handle complex, real-world audio-based queries, including mixed audio with varying loudness and temporal offsets.
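Scoring the single-choice portion of such a benchmark reduces to accuracy, and reporting it per task is what enables diagnosing model weaknesses; a minimal sketch (the function and task names here are illustrative, not AIR-Bench’s actual tooling):

```python
from collections import defaultdict

def per_task_accuracy(items):
    """items: iterable of (task, predicted_choice, gold_choice) triples.
    Returns overall accuracy plus a per-task breakdown."""
    correct, totals = defaultdict(int), defaultdict(int)
    for task, pred, gold in items:
        totals[task] += 1
        correct[task] += int(pred == gold)
    per_task = {t: correct[t] / totals[t] for t in totals}
    overall = sum(correct.values()) / sum(totals.values())
    return overall, per_task
```

Open-ended chat responses cannot be scored this way, which is why the chat component instead relies on judging model outputs against reference answers.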
AIR-Bench introduces a novel audio mixing strategy to simulate complex real-world scenarios and employs GPT-4-based evaluation to judge model-generated hypotheses against reference answers.</p> </div> <div class="ltx_para" id="S6.SS3.p6"> <p class="ltx_p" id="S6.SS3.p6.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p6.1.m1.1"><semantics id="S6.SS3.p6.1.m1.1a"><mo id="S6.SS3.p6.1.m1.1.1" xref="S6.SS3.p6.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p6.1.m1.1b"><ci id="S6.SS3.p6.1.m1.1.1.cmml" xref="S6.SS3.p6.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p6.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p6.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p6.1.1">SpokenWOZ.</em> SpokenWOZ <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib192" title="">192</a>]</cite> evaluates task-oriented dialogue (TOD) systems in spoken scenarios, addressing challenges unique to spoken conversations, such as incremental processing, disfluencies, incomplete utterances, and Automatic Speech Recognition (ASR) noise. It introduces novel metrics to assess performance in tasks like cross-turn slot detection and reasoning slot detection, which require integrating information across multiple turns and reasoning from implicit cues. 
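Slot-detection tasks of this kind are typically scored with precision, recall, and F1 over predicted (slot, value) pairs; the following micro-F1 sketch for a single dialogue is illustrative (the slot names are invented here, and SpokenWOZ defines its own exact metric variants):

```python
def slot_f1(predicted, gold):
    """Micro-averaged F1 between predicted and gold (slot, value) pairs."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Cross-turn slot detection stresses this metric because a gold pair may only be recoverable by combining evidence from several turns, so turn-local predictors lose recall.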
The benchmark encompasses multi-domain, human-to-human dialogues with diverse speech characteristics, testing systems on both textual and auditory inputs through large-scale annotated datasets with over 200,000 utterances and 249 hours of audio.</p> </div> <div class="ltx_para" id="S6.SS3.p7"> <p class="ltx_p" id="S6.SS3.p7.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p7.1.m1.1"><semantics id="S6.SS3.p7.1.m1.1a"><mo id="S6.SS3.p7.1.m1.1.1" xref="S6.SS3.p7.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p7.1.m1.1b"><ci id="S6.SS3.p7.1.m1.1.1.cmml" xref="S6.SS3.p7.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p7.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p7.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p7.1.1">SD-EVAL.</em> SD-Eval <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib7" title="">7</a>]</cite> evaluates spoken dialogue models across multiple dimensions, focusing on both spoken understanding and response generation beyond textual content. It assesses models’ abilities to process three key types of information embedded in speech: content (e.g., linguistic meaning), paralinguistic cues (e.g., emotion, accent, age), and environmental context (e.g., background sounds).
The benchmark consists of four sub-tasks—emotion, accent, age, and environment—constructed from diverse datasets and totaling 7,303 utterances spanning 8.76 hours.</p> </div> <div class="ltx_para" id="S6.SS3.p8"> <p class="ltx_p" id="S6.SS3.p8.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p8.1.m1.1"><semantics id="S6.SS3.p8.1.m1.1a"><mo id="S6.SS3.p8.1.m1.1.1" xref="S6.SS3.p8.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p8.1.m1.1b"><ci id="S6.SS3.p8.1.m1.1.1.cmml" xref="S6.SS3.p8.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p8.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p8.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p8.1.1">SuperCLUE.</em> SuperCLUE evaluates spoken dialogue systems across four main dimensions: voice interaction, general capabilities, scenario applications, and response speed. Key metrics include interruption recognition, speech tone adjustment, semantic understanding, naturalness of speech, and memory accuracy. Additionally, it measures real-time data retrieval, reasoning ability, compliance with commands, and multilingual translation accuracy. Scenario-specific applications like emotional counseling, health consultations, and customer service are assessed for precision and effectiveness. 
The final aspect is response timeliness, focusing on latency and delay management. However, this benchmark is not open source and focuses on Mandarin-language capability.</p> </div> <div class="ltx_para" id="S6.SS3.p9"> <p class="ltx_p" id="S6.SS3.p9.1"><math alttext="\bullet" class="ltx_Math" display="inline" id="S6.SS3.p9.1.m1.1"><semantics id="S6.SS3.p9.1.m1.1a"><mo id="S6.SS3.p9.1.m1.1.1" xref="S6.SS3.p9.1.m1.1.1.cmml">∙</mo><annotation-xml encoding="MathML-Content" id="S6.SS3.p9.1.m1.1b"><ci id="S6.SS3.p9.1.m1.1.1.cmml" xref="S6.SS3.p9.1.m1.1.1">∙</ci></annotation-xml><annotation encoding="application/x-tex" id="S6.SS3.p9.1.m1.1c">\bullet</annotation><annotation encoding="application/x-llamapun" id="S6.SS3.p9.1.m1.1d">∙</annotation></semantics></math> <em class="ltx_emph ltx_font_italic" id="S6.SS3.p9.1.1">MMAU.</em> MMAU <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib182" title="">182</a>]</cite> evaluates spoken dialogue models across multiple dimensions, encompassing 27 distinct tasks divided into reasoning and information extraction categories. It assesses models on their ability to comprehend and reason about speech, sound, and music by leveraging advanced cognitive skills and domain-specific knowledge. Key evaluated areas include temporal event reasoning, speaker role mapping, emotional tone interpretation, eco-acoustic knowledge, phonemic stress pattern analysis, and melodic structure interpretation. It examines not just basic recognition or transcription capabilities but also models’ proficiency in complex reasoning, contextual understanding, and the ability to extract and apply world knowledge.
Additionally, MMAU scrutinizes performance consistency across varying difficulty levels, testing systems’ depth of reasoning and robustness in real-world audio scenarios.</p> </div> </section> </section> <section class="ltx_section" id="S7"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">7 </span>Conclusion</h2> <div class="ltx_para" id="S7.p1"> <p class="ltx_p" id="S7.p1.1">In this work, we systematically review the research related to spoken dialogue models, categorizing it according to two paradigms: cascaded spoken dialogue models and end-to-end spoken dialogue models. Additionally, we provide a detailed overview of the core technologies behind spoken dialogue models, including speech representation, training paradigms, streaming duplex systems, and interaction mechanisms. In the speech representation module, we classify and explain the representations from both the input and output perspectives, focusing on different types of semantic and acoustic representations. In the training paradigm module, we thoroughly discuss five modalities of alignment for spoken dialogue models, multi-stage training strategies, model architectures, and generation paradigms. Following this, we provide an in-depth analysis of streaming input and output for spoken dialogue models, as well as the related duplex interaction technologies. Finally, we compile key training resources, evaluation metrics, and benchmarks relevant to spoken dialogue models. We specifically address the evaluation of different levels of intelligence in spoken dialogue models across various scenarios. It is important to note that, given that spoken dialogue models are a relatively new and emerging technology, many aspects, such as semantic and acoustic representations, still lack well-established paradigms. Therefore, at the end of each section, we include a dedicated discussion module to explore these open issues.
We hope that this survey will contribute to the further development of the field of spoken dialogue systems.</p> </div> </section> <section class="ltx_bibliography" id="bib"> <h2 class="ltx_title ltx_title_bibliography">References</h2> <ul class="ltx_biblist"> <li class="ltx_bibitem" id="bib.bib1"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[1]</span> <span class="ltx_bibblock"> Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. </span> <span class="ltx_bibblock">Gpt-4 technical report. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib1.1.1">arXiv preprint arXiv:2303.08774</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib2"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[2]</span> <span class="ltx_bibblock"> Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al. </span> <span class="ltx_bibblock">Musiclm: Generating music from text. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib2.1.1">arXiv preprint arXiv:2301.11325</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib3"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[3]</span> <span class="ltx_bibblock"> Sunghwan Ahn, Beom Jun Woo, Min Hyun Han, Chanyeong Moon, and Nam Soo Kim. </span> <span class="ltx_bibblock">Hilcodec: High fidelity and lightweight neural audio codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib3.1.1">arXiv preprint arXiv:2405.04752</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib4"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[4]</span> <span class="ltx_bibblock"> Yang Ai, Xiao-Hang Jiang, Ye-Xin Lu, Hui-Peng Du, and Zhen-Hua Ling. 
</span> <span class="ltx_bibblock">Apcodec: A neural audio codec with parallel amplitude and phase spectrum encoding and decoding. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib4.1.1">arXiv preprint arXiv:2402.10533</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib5"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[5]</span> <span class="ltx_bibblock"> Philip Anastassiou, Jiawei Chen, Jitong Chen, Yuanzhe Chen, Zhuo Chen, Ziyi Chen, Jian Cong, Lelai Deng, Chuang Ding, Lu Gao, et al. </span> <span class="ltx_bibblock">Seed-tts: A family of high-quality versatile speech generation models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib5.1.1">arXiv preprint arXiv:2406.02430</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib6"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[6]</span> <span class="ltx_bibblock"> Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. </span> <span class="ltx_bibblock">Palm 2 technical report. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib6.1.1">arXiv preprint arXiv:2305.10403</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib7"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[7]</span> <span class="ltx_bibblock"> Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu. </span> <span class="ltx_bibblock">Sd-eval: A benchmark dataset for spoken dialogue understanding beyond words. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib7.1.1">arXiv preprint arXiv:2406.13340</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib8"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[8]</span> <span class="ltx_bibblock"> Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M Tyers, and Gregor Weber. </span> <span class="ltx_bibblock">Common voice: A massively-multilingual speech corpus. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib8.1.1">arXiv preprint arXiv:1912.06670</span>, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib9"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[9]</span> <span class="ltx_bibblock"> Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick Von Platen, Yatharth Saraf, Juan Pino, et al. </span> <span class="ltx_bibblock">Xls-r: Self-supervised cross-lingual speech representation learning at scale. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib9.1.1">arXiv preprint arXiv:2111.09296</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib10"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[10]</span> <span class="ltx_bibblock"> Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. </span> <span class="ltx_bibblock">wav2vec 2.0: A framework for self-supervised learning of speech representations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib10.1.1">Advances in neural information processing systems</span>, 33:12449–12460, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib11"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[11]</span> <span class="ltx_bibblock"> Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. </span> <span class="ltx_bibblock">Qwen technical report. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib11.1.1">arXiv preprint arXiv:2309.16609</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib12"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[12]</span> <span class="ltx_bibblock"> Shaojie Bai, J Zico Kolter, and Vladlen Koltun. </span> <span class="ltx_bibblock">An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib12.1.1">arXiv preprint arXiv:1803.01271</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib13"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[13]</span> <span class="ltx_bibblock"> Satanjeev Banerjee and Alon Lavie. </span> <span class="ltx_bibblock">Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib13.1.1">Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization</span>, pages 65–72, 2005. </span> </li> <li class="ltx_bibitem" id="bib.bib14"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[14]</span> <span class="ltx_bibblock"> Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, et al. </span> <span class="ltx_bibblock">Seamless: Multilingual expressive and streaming speech translation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib14.1.1">arXiv preprint arXiv:2312.05187</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib15"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[15]</span> <span class="ltx_bibblock"> Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. </span> <span class="ltx_bibblock">Curriculum learning. 
</span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib15.1.1">Proceedings of the 26th annual international conference on machine learning</span>, pages 41–48, 2009. </span> </li> <li class="ltx_bibitem" id="bib.bib16"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[16]</span> <span class="ltx_bibblock"> Rachel M Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello. </span> <span class="ltx_bibblock">Medleydb: A multitrack dataset for annotation-intensive mir research. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib16.1.1">ISMIR</span>, volume 14, pages 155–160, 2014. </span> </li> <li class="ltx_bibitem" id="bib.bib17"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[17]</span> <span class="ltx_bibblock"> Juan J Bosch, Jordi Janer, Ferdinand Fuhrmann, and Perfecto Herrera. </span> <span class="ltx_bibblock">A comparison of sound segregation techniques for predominant instrument recognition in musical audio signals. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib17.1.1">ISMIR</span>, pages 559–564, 2012. </span> </li> <li class="ltx_bibitem" id="bib.bib18"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[18]</span> <span class="ltx_bibblock"> Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. </span> <span class="ltx_bibblock">Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib18.1.1">2017 20th conference of the oriental chapter of the international coordinating committee on speech databases and speech I/O systems and assessment (O-COCOSDA)</span>, pages 1–5. IEEE, 2017. 
</span> </li> <li class="ltx_bibitem" id="bib.bib19"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[19]</span> <span class="ltx_bibblock"> Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. </span> <span class="ltx_bibblock">Iemocap: Interactive emotional dyadic motion capture database. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib19.1.1">Language resources and evaluation</span>, 42:335–359, 2008. </span> </li> <li class="ltx_bibitem" id="bib.bib20"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[20]</span> <span class="ltx_bibblock"> Carlos Busso, Srinivas Parthasarathy, Alec Burmania, Mohammed AbdelWahab, Najmeh Sadoughi, and Emily Mower Provost. </span> <span class="ltx_bibblock">Msp-improv: An acted corpus of dyadic interactions to study emotion perception. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib20.1.1">IEEE Transactions on Affective Computing</span>, 8(1):67–80, 2016. </span> </li> <li class="ltx_bibitem" id="bib.bib21"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[21]</span> <span class="ltx_bibblock"> Edresson Casanova, Kelly Davis, Eren Gölge, Görkem Göknar, Iulian Gulea, Logan Hart, Aya Aljafari, Joshua Meyer, Reuben Morais, Samuel Olayemi, et al. </span> <span class="ltx_bibblock">Xtts: a massively multilingual zero-shot text-to-speech model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib21.1.1">arXiv preprint arXiv:2406.04904</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib22"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[22]</span> <span class="ltx_bibblock"> Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. </span> <span class="ltx_bibblock">Mistral 7b. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib22.1.1">arXiv preprint arXiv:2310.06825</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib23"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[23]</span> <span class="ltx_bibblock"> Chen Chen, Yuchen Hu, Wen Wu, Helin Wang, Eng Siong Chng, and Chao Zhang. </span> <span class="ltx_bibblock">Enhancing zero-shot text-to-speech synthesis with human feedback. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib23.1.1">arXiv preprint arXiv:2406.00654</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib24"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[24]</span> <span class="ltx_bibblock"> Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, et al. </span> <span class="ltx_bibblock">Gigaspeech: An evolving, multi-domain asr corpus with 10,000 hours of transcribed audio. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib24.1.1">arXiv preprint arXiv:2106.06909</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib25"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[25]</span> <span class="ltx_bibblock"> Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu, Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, et al. </span> <span class="ltx_bibblock">Emova: Empowering language models to see, hear and speak with vivid emotions. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib25.1.1">arXiv preprint arXiv:2409.18042</span>, 2024.
</span> </li> <li class="ltx_bibitem" id="bib.bib26"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[26]</span> <span class="ltx_bibblock"> Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. </span> <span class="ltx_bibblock">Evaluating large language models trained on code. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib26.1.1">arXiv preprint arXiv:2107.03374</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib27"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[27]</span> <span class="ltx_bibblock"> Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. </span> <span class="ltx_bibblock">Wavlm: Large-scale self-supervised pre-training for full stack speech processing. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib27.1.1">IEEE Journal of Selected Topics in Signal Processing</span>, 16(6):1505–1518, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib28"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[28]</span> <span class="ltx_bibblock"> Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei. </span> <span class="ltx_bibblock">Beats: Audio pre-training with acoustic tokenizers. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib28.1.1">arXiv preprint arXiv:2212.09058</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib29"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[29]</span> <span class="ltx_bibblock"> Yiming Chen, Xianghu Yue, Chen Zhang, Xiaoxue Gao, Robby T Tan, and Haizhou Li. </span> <span class="ltx_bibblock">Voicebench: Benchmarking llm-based voice assistants. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib29.1.1">arXiv preprint arXiv:2410.17196</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib30"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[30]</span> <span class="ltx_bibblock"> Yushen Chen, Zhikang Niu, Ziyang Ma, Keqi Deng, Chunhui Wang, Jian Zhao, Kai Yu, and Xie Chen. </span> <span class="ltx_bibblock">F5-tts: A fairytaler that fakes fluent and faithful speech with flow matching. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib30.1.1">arXiv preprint arXiv:2410.06885</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib31"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[31]</span> <span class="ltx_bibblock"> Po-Han Chi, Pei-Hung Chung, Tsung-Han Wu, Chun-Cheng Hsieh, Yen-Hao Chen, Shang-Wen Li, and Hung-yi Lee. </span> <span class="ltx_bibblock">Audio albert: A lite bert for self-supervised learning of audio representation. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib31.1.1">2021 IEEE Spoken Language Technology Workshop (SLT)</span>, pages 344–350. IEEE, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib32"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[32]</span> <span class="ltx_bibblock"> Cheol Jun Cho, Peter Wu, Tejas S Prabhune, Dhruv Agarwal, and Gopala K Anumanchipalli. </span> <span class="ltx_bibblock">Articulatory encodec: Vocal tract kinematics as a codec for speech. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib32.1.1">arXiv preprint arXiv:2406.12998</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib33"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[33]</span> <span class="ltx_bibblock"> Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al. 
</span> <span class="ltx_bibblock">Qwen2-audio technical report. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib33.1.1">arXiv preprint arXiv:2407.10759</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib34"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[34]</span> <span class="ltx_bibblock"> Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, and Jingren Zhou. </span> <span class="ltx_bibblock">Qwen-audio: Advancing universal audio understanding via unified large-scale audio-language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib34.1.1">arXiv preprint arXiv:2311.07919</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib35"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[35]</span> <span class="ltx_bibblock"> Yu-An Chung, Hao Tang, and James Glass. </span> <span class="ltx_bibblock">Vector-quantized autoregressive predictive coding. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib35.1.1">arXiv preprint arXiv:2005.08392</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib36"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[36]</span> <span class="ltx_bibblock"> Geoffrey Cideron, Sertan Girgin, Mauro Verzetti, Damien Vincent, Matej Kastelic, Zalán Borsos, Brian McWilliams, Victor Ungureanu, Olivier Bachem, Olivier Pietquin, et al. </span> <span class="ltx_bibblock">Musicrl: Aligning music generation to human preferences. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib36.1.1">arXiv preprint arXiv:2402.04229</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib37"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[37]</span> <span class="ltx_bibblock"> Christopher Cieri, David Miller, and Kevin Walker. 
</span> <span class="ltx_bibblock">The fisher corpus: A resource for the next generations of speech-to-text. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib37.1.1">LREC</span>, volume 4, pages 69–71, 2004. </span> </li> <li class="ltx_bibitem" id="bib.bib38"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[38]</span> <span class="ltx_bibblock"> Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. </span> <span class="ltx_bibblock">Think you have solved question answering? try arc, the ai2 reasoning challenge. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib38.1.1">arXiv preprint arXiv:1803.05457</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib39"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[39]</span> <span class="ltx_bibblock"> Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. </span> <span class="ltx_bibblock">Training verifiers to solve math word problems. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib39.1.1">arXiv preprint arXiv:2110.14168</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib40"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[40]</span> <span class="ltx_bibblock"> Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Défossez. </span> <span class="ltx_bibblock">Simple and controllable music generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib40.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib41"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[41]</span> <span class="ltx_bibblock"> Nilaksh Das, Saket Dingliwal, Srikanth Ronanki, Rohit Paturi, David Huang, Prashant Mathur, Jie Yuan, Dhanush Bekal, Xing Niu, Sai Muralidhar Jayanthi, et al. </span> <span class="ltx_bibblock">Speechverse: A large-scale generalizable audio language model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib41.1.1">arXiv preprint arXiv:2405.08295</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib42"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[42]</span> <span class="ltx_bibblock"> Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, and Xavier Bresson. </span> <span class="ltx_bibblock">FMA: A dataset for music analysis. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib42.1.1">18th International Society for Music Information Retrieval Conference (ISMIR)</span>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib43"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[43]</span> <span class="ltx_bibblock"> Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. </span> <span class="ltx_bibblock">High fidelity neural audio compression. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib43.1.1">arXiv preprint arXiv:2210.13438</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib44"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[44]</span> <span class="ltx_bibblock"> Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, and Neil Zeghidour. </span> <span class="ltx_bibblock">Moshi: a speech-text foundation model for real-time dialogue. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib44.1.1">arXiv preprint arXiv:2410.00037</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib45"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[45]</span> <span class="ltx_bibblock"> Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. </span> <span class="ltx_bibblock">Bert: Pre-training of deep bidirectional transformers for language understanding. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib45.1.1">arXiv preprint arXiv:1810.04805</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib46"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[46]</span> <span class="ltx_bibblock"> Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. </span> <span class="ltx_bibblock">Enhancing chat language models by scaling high-quality instructional conversations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib46.1.1">arXiv preprint arXiv:2305.14233</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib47"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[47]</span> <span class="ltx_bibblock"> Zhichen Dong, Zhanhui Zhou, Chao Yang, Jing Shao, and Yu Qiao. </span> <span class="ltx_bibblock">Attacks, defenses and evaluations for llm conversation safety: A survey. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib47.1.1">arXiv preprint arXiv:2402.09283</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib48"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[48]</span> <span class="ltx_bibblock"> Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu. </span> <span class="ltx_bibblock">Aishell-2: Transforming Mandarin ASR research into industrial scale. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib48.1.1">arXiv preprint arXiv:1808.10583</span>, 2018. 
</span> </li> <li class="ltx_bibitem" id="bib.bib49"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[49]</span> <span class="ltx_bibblock"> Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, et al. </span> <span class="ltx_bibblock">Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib49.1.1">arXiv preprint arXiv:2407.05407</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib50"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[50]</span> <span class="ltx_bibblock"> Zhihao Du, Jiaming Wang, Qian Chen, Yunfei Chu, Zhifu Gao, Zerui Li, Kai Hu, Xiaohuan Zhou, Jin Xu, Ziyang Ma, et al. </span> <span class="ltx_bibblock">Lauragpt: Listen, attend, understand, and regenerate audio with gpt. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib50.1.1">arXiv preprint arXiv:2310.04673</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib51"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[51]</span> <span class="ltx_bibblock"> Zhihao Du, Shiliang Zhang, Kai Hu, and Siqi Zheng. </span> <span class="ltx_bibblock">Funcodec: A fundamental, reproducible and integrable open-source toolkit for neural speech codec. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib51.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 591–595. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib52"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[52]</span> <span class="ltx_bibblock"> Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. </span> <span class="ltx_bibblock">The llama 3 herd of models. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib52.1.1">arXiv preprint arXiv:2407.21783</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib53"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[53]</span> <span class="ltx_bibblock"> Starkey Duncan. </span> <span class="ltx_bibblock">Some signals and rules for taking speaking turns in conversations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib53.1.1">Journal of Personality and Social Psychology</span>, 23(2):283, 1972. </span> </li> <li class="ltx_bibitem" id="bib.bib54"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[54]</span> <span class="ltx_bibblock"> Starkey Duncan Jr and George Niederehe. </span> <span class="ltx_bibblock">On signalling that it’s your turn to speak. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib54.1.1">Journal of Experimental Social Psychology</span>, 10(3):234–247, 1974. </span> </li> <li class="ltx_bibitem" id="bib.bib55"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[55]</span> <span class="ltx_bibblock"> Erik Ekstedt and Gabriel Skantze. </span> <span class="ltx_bibblock">Turngpt: A transformer-based language model for predicting turn-taking in spoken dialog. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib55.1.1">arXiv preprint arXiv:2010.10874</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib56"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[56]</span> <span class="ltx_bibblock"> Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Mohammad Norouzi, Douglas Eck, and Karen Simonyan. </span> <span class="ltx_bibblock">Neural audio synthesis of musical notes with wavenet autoencoders. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib56.1.1">International Conference on Machine Learning</span>, pages 1068–1077. PMLR, 2017. 
</span> </li> <li class="ltx_bibitem" id="bib.bib57"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[57]</span> <span class="ltx_bibblock"> Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. </span> <span class="ltx_bibblock">Llama-omni: Seamless speech interaction with large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib57.1.1">arXiv preprint arXiv:2409.06666</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib58"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[58]</span> <span class="ltx_bibblock"> Jiazhan Feng, Qingfeng Sun, Can Xu, Pu Zhao, Yaming Yang, Chongyang Tao, Dongyan Zhao, and Qingwei Lin. </span> <span class="ltx_bibblock">Mmdialog: A large-scale multi-turn dialogue dataset towards multi-modal open-domain conversation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib58.1.1">arXiv preprint arXiv:2211.05719</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib59"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[59]</span> <span class="ltx_bibblock"> Mauajama Firdaus, Hardik Chauhan, Asif Ekbal, and Pushpak Bhattacharyya. </span> <span class="ltx_bibblock">Meisd: A multimodal multi-label emotion, intensity and sentiment dialogue dataset for emotion recognition and sentiment analysis in conversations. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib59.1.1">Proceedings of the 28th International Conference on Computational Linguistics</span>, pages 4441–4453, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib60"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[60]</span> <span class="ltx_bibblock"> Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra. </span> <span class="ltx_bibblock">Fsd50k: An open dataset of human-labeled sound events. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib60.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 30:829–852, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib61"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[61]</span> <span class="ltx_bibblock"> Chaoyou Fu, Haojia Lin, Zuwei Long, Yunhang Shen, Meng Zhao, Yifan Zhang, Xiong Wang, Di Yin, Long Ma, Xiawu Zheng, et al. </span> <span class="ltx_bibblock">Vita: Towards open-source interactive omni multimodal llm. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib61.1.1">arXiv preprint arXiv:2408.05211</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib62"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[62]</span> <span class="ltx_bibblock"> Philip Gage. </span> <span class="ltx_bibblock">A new algorithm for data compression. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib62.1.1">The C Users Journal</span>, 12(2):23–38, 1994. </span> </li> <li class="ltx_bibitem" id="bib.bib63"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[63]</span> <span class="ltx_bibblock"> Daniel Galvez, Greg Diamos, Juan Ciro, Juan Felipe Cerón, Keith Achorn, Anjali Gopi, David Kanter, Maximilian Lam, Mark Mazumder, and Vijay Janapa Reddi. </span> <span class="ltx_bibblock">The people’s speech: A large-scale diverse english speech recognition dataset for commercial usage. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib63.1.1">arXiv preprint arXiv:2111.09344</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib64"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[64]</span> <span class="ltx_bibblock"> Itai Gat, Felix Kreuk, Tu Anh Nguyen, Ann Lee, Jade Copet, Gabriel Synnaeve, Emmanuel Dupoux, and Yossi Adi. 
</span> <span class="ltx_bibblock">Augmentation invariant discrete representation for generative spoken language modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib64.1.1">arXiv preprint arXiv:2209.15483</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib65"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[65]</span> <span class="ltx_bibblock"> Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. </span> <span class="ltx_bibblock">Audio set: An ontology and human-labeled dataset for audio events. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib65.1.1">2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 776–780. IEEE, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib66"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[66]</span> <span class="ltx_bibblock"> Arushi Goel, Zhifeng Kong, Rafael Valle, and Bryan Catanzaro. </span> <span class="ltx_bibblock">Audio dialogues: Dialogues dataset for audio and music understanding. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib66.1.1">arXiv preprint arXiv:2404.07616</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib67"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[67]</span> <span class="ltx_bibblock"> Yuan Gong, Alexander H Liu, Hongyin Luo, Leonid Karlinsky, and James Glass. </span> <span class="ltx_bibblock">Joint audio and speech understanding. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib67.1.1">2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)</span>, pages 1–8. IEEE, 2023. 
</span> </li> <li class="ltx_bibitem" id="bib.bib68"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[68]</span> <span class="ltx_bibblock"> Yuan Gong, Hongyin Luo, Alexander H Liu, Leonid Karlinsky, and James Glass. </span> <span class="ltx_bibblock">Listen, think, and understand. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib68.1.1">arXiv preprint arXiv:2305.10790</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib69"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[69]</span> <span class="ltx_bibblock"> Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. </span> <span class="ltx_bibblock">Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib69.1.1">Proceedings of the 23rd International Conference on Machine Learning</span>, pages 369–376, 2006. </span> </li> <li class="ltx_bibitem" id="bib.bib70"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[70]</span> <span class="ltx_bibblock"> Haohan Guo, Fenglong Xie, Kun Xie, Dongchao Yang, Dake Guo, Xixin Wu, and Helen Meng. </span> <span class="ltx_bibblock">Socodec: A semantic-ordered multi-stream speech codec for efficient language model based text-to-speech synthesis. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib70.1.1">arXiv preprint arXiv:2409.00933</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib71"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[71]</span> <span class="ltx_bibblock"> Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, and Xu Tan. </span> <span class="ltx_bibblock">Prompttts: Controllable text-to-speech with text descriptions. 
</span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib71.1.1">ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 1–5. IEEE, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib72"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[72]</span> <span class="ltx_bibblock"> Kohei Hara, Koji Inoue, Katsuya Takanashi, and Tatsuya Kawahara. </span> <span class="ltx_bibblock">Prediction of turn-taking using multitask learning with prediction of backchannels and fillers. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib72.1.1">Interspeech</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib73"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[73]</span> <span class="ltx_bibblock"> Kohei Hara, Koji Inoue, Katsuya Takanashi, and Tatsuya Kawahara. </span> <span class="ltx_bibblock">Turn-taking prediction based on detection of transition relevance place. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib73.1.1">INTERSPEECH</span>, pages 4170–4174, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib74"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[74]</span> <span class="ltx_bibblock"> Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, et al. </span> <span class="ltx_bibblock">Textually pretrained speech language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib74.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib75"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[75]</span> <span class="ltx_bibblock"> Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 
</span> <span class="ltx_bibblock">Measuring massive multitask language understanding. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib75.1.1">arXiv preprint arXiv:2009.03300</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib76"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[76]</span> <span class="ltx_bibblock"> Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. </span> <span class="ltx_bibblock">Cnn architectures for large-scale audio classification. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib76.1.1">2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 131–135. IEEE, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib77"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[77]</span> <span class="ltx_bibblock"> Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. </span> <span class="ltx_bibblock">Clipscore: A reference-free evaluation metric for image captioning. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib77.1.1">arXiv preprint arXiv:2104.08718</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib78"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[78]</span> <span class="ltx_bibblock"> Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. </span> <span class="ltx_bibblock">Hubert: Self-supervised speech representation learning by masked prediction of hidden units. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib78.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 29:3451–3460, 2021. 
</span> </li> <li class="ltx_bibitem" id="bib.bib79"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[79]</span> <span class="ltx_bibblock"> Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. </span> <span class="ltx_bibblock">Lora: Low-rank adaptation of large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib79.1.1">arXiv preprint arXiv:2106.09685</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib80"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[80]</span> <span class="ltx_bibblock"> Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Lingwei Meng, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, et al. </span> <span class="ltx_bibblock">Wavllm: Towards robust and adaptive speech large language model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib80.1.1">arXiv preprint arXiv:2404.00656</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib81"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[81]</span> <span class="ltx_bibblock"> Jiawei Huang, Yi Ren, Rongjie Huang, Dongchao Yang, Zhenhui Ye, Chen Zhang, Jinglin Liu, Xiang Yin, Zejun Ma, and Zhou Zhao. </span> <span class="ltx_bibblock">Make-an-audio 2: Temporal-enhanced text-to-audio generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib81.1.1">arXiv preprint arXiv:2305.18474</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib82"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[82]</span> <span class="ltx_bibblock"> Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. </span> <span class="ltx_bibblock">A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib82.1.1">arXiv preprint arXiv:2311.05232</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib83"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[83]</span> <span class="ltx_bibblock"> Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. </span> <span class="ltx_bibblock">Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib83.1.1">International Conference on Machine Learning</span>, pages 13916–13932. PMLR, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib84"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[84]</span> <span class="ltx_bibblock"> Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. </span> <span class="ltx_bibblock">Audiogpt: Understanding and generating speech, music, sound, and talking head. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib84.1.1">Proceedings of the AAAI Conference on Artificial Intelligence</span>, volume 38, pages 23802–23804, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib85"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[85]</span> <span class="ltx_bibblock"> Wenyong Huang, Zhenhe Zhang, Yu Ting Yeung, Xin Jiang, and Qun Liu. </span> <span class="ltx_bibblock">Spiral: Self-supervised perturbation-invariant representation learning for speech pre-training. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib85.1.1">arXiv preprint arXiv:2201.10207</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib86"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[86]</span> <span class="ltx_bibblock"> Zhichao Huang, Chutong Meng, and Tom Ko. 
</span> <span class="ltx_bibblock">Repcodec: A speech representation codec for speech tokenization. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib86.1.1">arXiv preprint arXiv:2309.00169</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib87"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[87]</span> <span class="ltx_bibblock"> Iris AM Huijben, Matthijs Douze, Matthew Muckley, Ruud JG van Sloun, and Jakob Verbeek. </span> <span class="ltx_bibblock">Residual quantization with implicit neural codebooks. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib87.1.1">arXiv preprint arXiv:2401.14732</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib88"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[88]</span> <span class="ltx_bibblock"> Keith Ito and Linda Johnson. </span> <span class="ltx_bibblock">The lj speech dataset. </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://keithito.com/LJ-Speech-Dataset/" title="">https://keithito.com/LJ-Speech-Dataset/</a>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib89"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[89]</span> <span class="ltx_bibblock"> Shengpeng Ji, Minghui Fang, Ziyue Jiang, Rongjie Huang, Jialong Zuo, Shulei Wang, and Zhou Zhao. </span> <span class="ltx_bibblock">Language-codec: Reducing the gaps between discrete codec representation and speech language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib89.1.1">arXiv preprint arXiv:2402.12208</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib90"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[90]</span> <span class="ltx_bibblock"> Shengpeng Ji, Ziyue Jiang, Xize Cheng, Yifu Chen, Minghui Fang, Jialong Zuo, Qian Yang, Ruiqi Li, Ziang Zhang, Xiaoda Yang, et al. 
</span> <span class="ltx_bibblock">Wavtokenizer: An efficient acoustic discrete codec tokenizer for audio language modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib90.1.1">arXiv preprint arXiv:2408.16532</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib91"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[91]</span> <span class="ltx_bibblock"> Shengpeng Ji, Ziyue Jiang, Hanting Wang, Jialong Zuo, and Zhou Zhao. </span> <span class="ltx_bibblock">Mobilespeech: A fast and high-fidelity framework for mobile zero-shot text-to-speech. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib91.1.1">arXiv preprint arXiv:2402.09378</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib92"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[92]</span> <span class="ltx_bibblock"> Shengpeng Ji, Jialong Zuo, Minghui Fang, Ziyue Jiang, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. </span> <span class="ltx_bibblock">Textrolspeech: A text style control speech corpus with codec language text-to-speech models. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib92.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 10301–10305. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib93"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[93]</span> <span class="ltx_bibblock"> Shengpeng Ji, Jialong Zuo, Minghui Fang, Siqi Zheng, Qian Chen, Wen Wang, Ziyue Jiang, Hai Huang, Xize Cheng, Rongjie Huang, et al. </span> <span class="ltx_bibblock">Controlspeech: Towards simultaneous zero-shot speaker cloning and zero-shot language style control with decoupled codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib93.1.1">arXiv preprint arXiv:2406.01205</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib94"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[94]</span> <span class="ltx_bibblock"> Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. </span> <span class="ltx_bibblock">Cvss corpus and massively multilingual speech-to-speech translation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib94.1.1">arXiv preprint arXiv:2201.03713</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib95"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[95]</span> <span class="ltx_bibblock"> Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. </span> <span class="ltx_bibblock">Mixtral of experts. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib95.1.1">arXiv preprint arXiv:2401.04088</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib96"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[96]</span> <span class="ltx_bibblock"> Ziyue Jiang, Jinglin Liu, Yi Ren, Jinzheng He, Zhenhui Ye, Shengpeng Ji, Qian Yang, Chen Zhang, Pengfei Wei, Chunfeng Wang, et al. </span> <span class="ltx_bibblock">Mega-tts 2: Boosting prompting mechanisms for zero-shot speech synthesis. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib96.1.1">The Twelfth International Conference on Learning Representations</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib97"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[97]</span> <span class="ltx_bibblock"> Ziyue Jiang, Yi Ren, Zhenhui Ye, Jinglin Liu, Chen Zhang, Qian Yang, Shengpeng Ji, Rongjie Huang, Chunfeng Wang, Xiang Yin, et al. </span> <span class="ltx_bibblock">Mega-tts: Zero-shot text-to-speech at scale with intrinsic inductive bias. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib97.1.1">arXiv preprint arXiv:2306.03509</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib98"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[98]</span> <span class="ltx_bibblock"> Chunxiang Jin, Minghui Yang, and Zujie Wen. </span> <span class="ltx_bibblock">Duplex conversation in outbound agent system. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib98.1.1">Interspeech</span>, pages 4866–4867, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib99"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[99]</span> <span class="ltx_bibblock"> Yizhang Jin, Jian Li, Yexin Liu, Tianjun Gu, Kai Wu, Zhengkai Jiang, Muyang He, Bo Zhao, Xin Tan, Zhenye Gan, et al. </span> <span class="ltx_bibblock">Efficient multimodal large language models: A survey. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib99.1.1">arXiv preprint arXiv:2405.10739</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib100"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[100]</span> <span class="ltx_bibblock"> Zeqian Ju, Yuancheng Wang, Kai Shen, Xu Tan, Detai Xin, Dongchao Yang, Yanqing Liu, Yichong Leng, Kaitao Song, Siliang Tang, et al. </span> <span class="ltx_bibblock">Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib100.1.1">arXiv preprint arXiv:2403.03100</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib101"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[101]</span> <span class="ltx_bibblock"> Jacob Kahn, Morgane Riviere, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. 
</span> <span class="ltx_bibblock">Libri-light: A benchmark for asr with limited or no supervision. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib101.1.1">ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 7669–7673. IEEE, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib102"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[102]</span> <span class="ltx_bibblock"> Eugene Kharitonov, Jade Copet, Kushal Lakhotia, Tu Anh Nguyen, Paden Tomasello, Ann Lee, Ali Elkahky, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, et al. </span> <span class="ltx_bibblock">textless-lib: A library for textless spoken language processing. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib102.1.1">arXiv preprint arXiv:2202.07359</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib103"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[103]</span> <span class="ltx_bibblock"> Hatim Khouzaimi, Romain Laroche, and Fabrice Lefèvre. </span> <span class="ltx_bibblock">Reinforcement learning for turn-taking management in incremental spoken dialogue systems. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib103.1.1">IJCAI</span>, pages 2831–2837, 2016. </span> </li> <li class="ltx_bibitem" id="bib.bib104"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[104]</span> <span class="ltx_bibblock"> Kevin Kilgour, Mauricio Zuluaga, Dominik Roblek, and Matthew Sharifi. 
</span> <span class="ltx_bibblock">Fréchet audio distance: A metric for evaluating music enhancement algorithms. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib104.2.1">arXiv preprint arXiv:1812.08466</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib105"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[105]</span> <span class="ltx_bibblock"> Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. </span> <span class="ltx_bibblock">Audiocaps: Generating captions for audios in the wild. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib105.1.1">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)</span>, pages 119–132, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib106"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[106]</span> <span class="ltx_bibblock"> Heeseung Kim, Soonshin Seo, Kyeongseok Jeong, Ohsung Kwon, Jungwhan Kim, Jaehong Lee, Eunwoo Song, Myungwoo Oh, Sungroh Yoon, and Kang Min Yoo. </span> <span class="ltx_bibblock">Unified speech-text pretraining for spoken dialog modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib106.1.1">arXiv preprint arXiv:2402.05706</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib107"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[107]</span> <span class="ltx_bibblock"> Jaehyeon Kim, Jungil Kong, and Juhee Son. </span> <span class="ltx_bibblock">Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib107.1.1">International Conference on Machine Learning</span>, pages 5530–5540. PMLR, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib108"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[108]</span> <span class="ltx_bibblock"> Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. </span> <span class="ltx_bibblock">Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib108.1.1">Advances in neural information processing systems</span>, 33:17022–17033, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib109"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[109]</span> <span class="ltx_bibblock"> Jungil Kong, Jihoon Park, Beomjeong Kim, Jeongmin Kim, Dohee Kong, and Sangjin Kim. </span> <span class="ltx_bibblock">Vits2: Improving quality and efficiency of single-stage text-to-speech with adversarial learning and architecture design. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib109.1.1">arXiv preprint arXiv:2307.16430</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib110"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[110]</span> <span class="ltx_bibblock"> Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D Plumbley. </span> <span class="ltx_bibblock">Panns: Large-scale pretrained audio neural networks for audio pattern recognition. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib110.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 28:2880–2894, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib111"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[111]</span> <span class="ltx_bibblock"> Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, and Bryan Catanzaro. </span> <span class="ltx_bibblock">Audio flamingo: A novel audio language model with few-shot learning and dialogue abilities. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib111.1.1">arXiv preprint arXiv:2402.01831</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib112"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[112]</span> <span class="ltx_bibblock"> Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richárd Nagyfi, et al. </span> <span class="ltx_bibblock">Openassistant conversations-democratizing large language model alignment. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib112.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib113"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[113]</span> <span class="ltx_bibblock"> Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, and Kundan Kumar. </span> <span class="ltx_bibblock">High-fidelity audio compression with improved rvqgan. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib113.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib114"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[114]</span> <span class="ltx_bibblock"> Divesh Lala, Koji Inoue, and Tatsuya Kawahara. 
</span> <span class="ltx_bibblock">Smooth turn-taking by a robot using an online continuous model to generate turn-taking cues. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib114.1.1">2019 International Conference on Multimodal Interaction</span>, pages 226–234, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib115"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[115]</span> <span class="ltx_bibblock"> Divesh Lala, Pierrick Milhorat, Koji Inoue, Masanari Ishida, Katsuya Takanashi, and Tatsuya Kawahara. </span> <span class="ltx_bibblock">Attentive listening system with backchanneling, response generation and flexible turn-taking. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib115.1.1">Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue</span>, pages 127–136, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib116"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[116]</span> <span class="ltx_bibblock"> Max WY Lam, Qiao Tian, Tang Li, Zongyu Yin, Siyuan Feng, Ming Tu, Yuliang Ji, Rui Xia, Mingbo Ma, Xuchen Song, et al. </span> <span class="ltx_bibblock">Efficient neural music generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib116.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib117"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[117]</span> <span class="ltx_bibblock"> Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, et al. </span> <span class="ltx_bibblock">Voicebox: Text-guided multilingual universal speech generation at scale. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib117.1.1">Advances in neural information processing systems</span>, 36, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib118"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[118]</span> <span class="ltx_bibblock"> Yichong Leng, Zhifang Guo, Kai Shen, Xu Tan, Zeqian Ju, Yanqing Liu, Yufei Liu, Dongchao Yang, Leying Zhang, Kaitao Song, et al. </span> <span class="ltx_bibblock">Prompttts 2: Describing and generating voices with text prompt. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib118.1.1">arXiv preprint arXiv:2309.02285</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib119"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[119]</span> <span class="ltx_bibblock"> Hanzhao Li, Liumeng Xue, Haohan Guo, Xinfa Zhu, Yuanjun Lv, Lei Xie, Yunlin Chen, Hao Yin, and Zhifei Li. </span> <span class="ltx_bibblock">Single-codec: Single-codebook speech codec towards high-performance speech generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib119.1.1">arXiv preprint arXiv:2406.07422</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib120"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[120]</span> <span class="ltx_bibblock"> Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, and Chengjie Wang. </span> <span class="ltx_bibblock">A survey on benchmarks of multimodal large language models, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib121"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[121]</span> <span class="ltx_bibblock"> Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. </span> <span class="ltx_bibblock">Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib121.1.1">International conference on machine learning</span>, pages 19730–19742. PMLR, 2023. 
</span> </li> <li class="ltx_bibitem" id="bib.bib122"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[122]</span> <span class="ltx_bibblock"> Yadong Li, Haoze Sun, Mingan Lin, Tianpeng Li, Guosheng Dong, Tao Zhang, Bowen Ding, Wei Song, Zhenglin Cheng, Yuqi Huo, et al. </span> <span class="ltx_bibblock">Baichuan-omni technical report. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib122.1.1">arXiv preprint arXiv:2410.08565</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib123"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[123]</span> <span class="ltx_bibblock"> Yuanchao Li, Yumnah Mohamied, Peter Bell, and Catherine Lai. </span> <span class="ltx_bibblock">Exploration of a self-supervised speech model: A study on emotional corpora. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib123.1.1">2022 IEEE Spoken Language Technology Workshop (SLT)</span>, pages 868–875. IEEE, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib124"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[124]</span> <span class="ltx_bibblock"> Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". </span> <span class="ltx_bibblock">Openorca: An open dataset of gpt augmented flan reasoning traces. </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/Open-Orca/OpenOrca" title="">https://huggingface.co/Open-Orca/OpenOrca</a>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib125"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[125]</span> <span class="ltx_bibblock"> Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. </span> <span class="ltx_bibblock">Holistic evaluation of language models. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib125.1.1">arXiv preprint arXiv:2211.09110</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib126"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[126]</span> <span class="ltx_bibblock"> Chin-Yew Lin. </span> <span class="ltx_bibblock">Rouge: A package for automatic evaluation of summaries. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib126.1.1">Text summarization branches out</span>, pages 74–81, 2004. </span> </li> <li class="ltx_bibitem" id="bib.bib127"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[127]</span> <span class="ltx_bibblock"> Guan-Ting Lin, Cheng-Han Chiang, and Hung-yi Lee. </span> <span class="ltx_bibblock">Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib127.1.1">arXiv preprint arXiv:2402.12786</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib128"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[128]</span> <span class="ltx_bibblock"> Guan-Ting Lin, Prashanth Gurunath Shivakumar, Ankur Gandhe, Chao-Han Huck Yang, Yile Gu, Shalini Ghosh, Andreas Stolcke, Hung-yi Lee, and Ivan Bulyko. </span> <span class="ltx_bibblock">Paralinguistics-enhanced large language modeling of spoken dialogue. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib128.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 10316–10320. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib129"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[129]</span> <span class="ltx_bibblock"> Guan-Ting Lin, Prashanth Gurunath Shivakumar, Aditya Gourav, Yile Gu, Ankur Gandhe, Hung-yi Lee, and Ivan Bulyko. 
</span> <span class="ltx_bibblock">Align-slm: Textless spoken language models with reinforcement learning from ai feedback, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib130"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[130]</span> <span class="ltx_bibblock"> Ting-En Lin, Yuchuan Wu, Fei Huang, Luo Si, Jian Sun, and Yongbin Li. </span> <span class="ltx_bibblock">Duplex conversation: Towards human-like interaction in spoken dialogue systems. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib130.1.1">Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining</span>, pages 3299–3308, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib131"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[131]</span> <span class="ltx_bibblock"> Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. </span> <span class="ltx_bibblock">Flow matching for generative modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib131.1.1">arXiv preprint arXiv:2210.02747</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib132"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[132]</span> <span class="ltx_bibblock"> Samuel Lipping, Parthasaarathy Sudarsanam, Konstantinos Drossos, and Tuomas Virtanen. </span> <span class="ltx_bibblock">Clotho-aqa: A crowdsourced dataset for audio question answering. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib132.1.1">2022 30th European Signal Processing Conference (EUSIPCO)</span>, pages 1140–1144. IEEE, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib133"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[133]</span> <span class="ltx_bibblock"> Andy T Liu, Shu-wen Yang, Po-Han Chi, Po-chun Hsu, and Hung-yi Lee. 
</span> <span class="ltx_bibblock">Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib133.1.1">ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 6419–6423. IEEE, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib134"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[134]</span> <span class="ltx_bibblock"> Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley. </span> <span class="ltx_bibblock">Audioldm: Text-to-audio generation with latent diffusion models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib134.1.1">arXiv preprint arXiv:2301.12503</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib135"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[135]</span> <span class="ltx_bibblock"> Haohe Liu, Xuenan Xu, Yi Yuan, Mengyue Wu, Wenwu Wang, and Mark D Plumbley. </span> <span class="ltx_bibblock">Semanticodec: An ultra low bitrate semantic audio codec for general sound. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib135.1.1">arXiv preprint arXiv:2405.00233</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib136"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[136]</span> <span class="ltx_bibblock"> Haohe Liu, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Qiao Tian, Yuping Wang, Wenwu Wang, Yuxuan Wang, and Mark D Plumbley. </span> <span class="ltx_bibblock">Audioldm 2: Learning holistic audio generation with self-supervised pretraining. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib136.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib137"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[137]</span> <span class="ltx_bibblock"> Rui Liu, Yifan Hu, Ren Yi, Yin Xiang, and Haizhou Li. </span> <span class="ltx_bibblock">Generative expressive conversational speech synthesis. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib137.1.1">arXiv preprint arXiv:2407.21491</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib138"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[138]</span> <span class="ltx_bibblock"> Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. </span> <span class="ltx_bibblock">G-eval: Nlg evaluation using gpt-4 with better human alignment. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib138.1.1">arXiv preprint arXiv:2303.16634</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib139"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[139]</span> <span class="ltx_bibblock"> Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. </span> <span class="ltx_bibblock">The flan collection: Designing data and methods for effective instruction tuning. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib139.1.1">International Conference on Machine Learning</span>, pages 22631–22648. PMLR, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib140"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[140]</span> <span class="ltx_bibblock"> Dan Lyth and Simon King. </span> <span class="ltx_bibblock">Natural language guidance of high-fidelity text-to-speech with synthetic annotations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib140.1.1">arXiv preprint arXiv:2402.01912</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib141"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[141]</span> <span class="ltx_bibblock"> Yinghao Ma, Anders Øland, Anton Ragni, Bleiz MacSen Del Sette, Charalampos Saitis, Chris Donahue, Chenghua Lin, Christos Plachouras, Emmanouil Benetos, Elio Quinton, et al. </span> <span class="ltx_bibblock">Foundation models for music: A survey. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib141.1.1">arXiv preprint arXiv:2408.14340</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib142"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[142]</span> <span class="ltx_bibblock"> Ziyang Ma, Yakun Song, Chenpeng Du, Jian Cong, Zhuo Chen, Yuping Wang, Yuxuan Wang, and Xie Chen. </span> <span class="ltx_bibblock">Language model can listen while speaking. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib142.1.1">arXiv preprint arXiv:2408.02622</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib143"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[143]</span> <span class="ltx_bibblock"> Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. </span> <span class="ltx_bibblock">emotion2vec: Self-supervised pre-training for speech emotion representation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib143.1.1">arXiv preprint arXiv:2312.15185</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib144"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[144]</span> <span class="ltx_bibblock"> Kiwan Maeng, Alexei Colin, and Brandon Lucia. </span> <span class="ltx_bibblock">Alpaca: Intermittent execution without checkpoints. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib144.1.1">Proceedings of the ACM on Programming Languages</span>, 1(OOPSLA):1–30, 2017. 
</span> </li> <li class="ltx_bibitem" id="bib.bib145"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[145]</span> <span class="ltx_bibblock"> Soumi Maiti, Yifan Peng, Shukjae Choi, Jee-weon Jung, Xuankai Chang, and Shinji Watanabe. </span> <span class="ltx_bibblock">Voxtlm: Unified decoder-only models for consolidating speech recognition, synthesis and speech, text continuation tasks. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib145.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 13326–13330. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib146"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[146]</span> <span class="ltx_bibblock"> Matthew Marge, Carol Espy-Wilson, Nigel G Ward, Abeer Alwan, Yoav Artzi, Mohit Bansal, Gil Blankenship, Joyce Chai, Hal Daumé III, Debadeepta Dey, et al. </span> <span class="ltx_bibblock">Spoken language interaction with robots: Recommendations for future research. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib146.1.1">Computer Speech & Language</span>, 71:101255, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib147"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[147]</span> <span class="ltx_bibblock"> Xinhao Mei, Chutong Meng, Haohe Liu, Qiuqiang Kong, Tom Ko, Chengqi Zhao, Mark D Plumbley, Yuexian Zou, and Wenwu Wang. </span> <span class="ltx_bibblock">Wavcaps: A chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib147.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib148"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[148]</span> <span class="ltx_bibblock"> Ziqiao Meng, Qichao Wang, Wenqian Cui, Yifei Zhang, Bingzhe Wu, Irwin King, Liang Chen, and Peilin Zhao. </span> <span class="ltx_bibblock">Sd-gpt: Autoregressive spoken dialogue language modeling with decoder-only transformers. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib148.1.1">Audio Imagination: NeurIPS 2024 Workshop AI-Driven Speech, Music, and Sound Generation</span>. </span> </li> <li class="ltx_bibitem" id="bib.bib149"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[149]</span> <span class="ltx_bibblock"> Fabian Mentzer, David Minnen, Eirikur Agustsson, and Michael Tschannen. </span> <span class="ltx_bibblock">Finite scalar quantization: Vq-vae made simple. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib149.1.1">arXiv preprint arXiv:2309.15505</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib150"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[150]</span> <span class="ltx_bibblock"> Annamaria Mesaros, Toni Heittola, Aleksandr Diment, Benjamin Elizalde, Ankit Shah, Emmanuel Vincent, Bhiksha Raj, and Tuomas Virtanen. </span> <span class="ltx_bibblock">Dcase 2017 challenge setup: Tasks, datasets and baseline system. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib150.1.1">DCASE 2017-workshop on detection and classification of acoustic scenes and events</span>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib151"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[151]</span> <span class="ltx_bibblock"> Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen. </span> <span class="ltx_bibblock">Tut database for acoustic scene classification and sound event detection. 
</span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib151.1.1">2016 24th European Signal Processing Conference (EUSIPCO)</span>, pages 1128–1132. IEEE, 2016. </span> </li> <li class="ltx_bibitem" id="bib.bib152"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[152]</span> <span class="ltx_bibblock"> Annamaria Mesaros, Toni Heittola, Tuomas Virtanen, and Mark D Plumbley. </span> <span class="ltx_bibblock">Sound event detection: A tutorial. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib152.1.1">IEEE Signal Processing Magazine</span>, 38(5):67–83, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib153"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[153]</span> <span class="ltx_bibblock"> Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. </span> <span class="ltx_bibblock">Cross-task generalization via natural language crowdsourcing instructions. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib153.1.1">arXiv preprint arXiv:2104.08773</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib154"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[154]</span> <span class="ltx_bibblock"> Kentaro Mitsui, Koh Mitsuda, Toshiaki Wakatsuki, Yukiya Hono, and Kei Sawada. </span> <span class="ltx_bibblock">Pslm: Parallel generation of text and speech with llms for low-latency spoken dialogue systems. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib154.1.1">arXiv preprint arXiv:2406.12428</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib155"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[155]</span> <span class="ltx_bibblock"> Abdelrahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob D Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, et al. 
</span> <span class="ltx_bibblock">Self-supervised speech representation learning: A review. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib155.1.1">IEEE Journal of Selected Topics in Signal Processing</span>, 16(6):1179–1210, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib156"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[156]</span> <span class="ltx_bibblock"> Eliya Nachmani, Alon Levkovitch, Roy Hirsch, Julian Salazar, Chulayuth Asawaroengchai, Soroosh Mariooryad, Ehud Rivlin, RJ Skerry-Ryan, and Michelle Tadmor Ramanovich. </span> <span class="ltx_bibblock">Spoken question answering and speech continuation using spectrogram-powered llm. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib156.1.1">arXiv preprint arXiv:2305.15255</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib157"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[157]</span> <span class="ltx_bibblock"> Tu Anh Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Elkahky, Paden Tomasello, Robin Algayres, Benoit Sagot, Abdelrahman Mohamed, et al. </span> <span class="ltx_bibblock">Generative spoken dialogue language modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib157.1.1">Transactions of the Association for Computational Linguistics</span>, 11:250–266, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib158"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[158]</span> <span class="ltx_bibblock"> Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R Costa-Jussa, Maha Elbayad, Sravya Popuri, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, et al. </span> <span class="ltx_bibblock">Spirit-lm: Interleaved spoken and written language model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib158.1.1">arXiv preprint arXiv:2402.05755</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib159"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[159]</span> <span class="ltx_bibblock"> Yazhe Niu, Shuai Hu, and Yun Chen. </span> <span class="ltx_bibblock">Cleans2s: High-quality and streaming speech-to-speech interactive agent in a single file. </span> <span class="ltx_bibblock"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/opendilab/CleanS2S" title="">https://github.com/opendilab/CleanS2S</a>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib160"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[160]</span> <span class="ltx_bibblock"> Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. </span> <span class="ltx_bibblock">Librispeech: an asr corpus based on public domain audio books. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib160.1.1">2015 IEEE international conference on acoustics, speech and signal processing (ICASSP)</span>, pages 5206–5210. IEEE, 2015. </span> </li> <li class="ltx_bibitem" id="bib.bib161"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[161]</span> <span class="ltx_bibblock"> Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. </span> <span class="ltx_bibblock">Bleu: a method for automatic evaluation of machine translation. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib161.1.1">Proceedings of the 40th annual meeting of the Association for Computational Linguistics</span>, pages 311–318, 2002. </span> </li> <li class="ltx_bibitem" id="bib.bib162"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[162]</span> <span class="ltx_bibblock"> Se Jin Park, Chae Won Kim, Hyeongseop Rha, Minsu Kim, Joanna Hong, Jeong Hun Yeo, and Yong Man Ro. </span> <span class="ltx_bibblock">Let’s go real talk: Spoken dialogue model for face-to-face conversation. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib162.1.1">arXiv preprint arXiv:2406.07867</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib163"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[163]</span> <span class="ltx_bibblock"> Puyuan Peng, Po-Yao Huang, Daniel Li, Abdelrahman Mohamed, and David Harwath. </span> <span class="ltx_bibblock">Voicecraft: Zero-shot speech editing and text-to-speech in the wild. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib163.1.1">arXiv preprint arXiv:2403.16973</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib164"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[164]</span> <span class="ltx_bibblock"> Leonardo Pepino, Pablo Riera, and Luciana Ferrer. </span> <span class="ltx_bibblock">Encodecmae: Leveraging neural codecs for universal audio representation learning. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib164.1.1">arXiv preprint arXiv:2309.07391</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib165"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[165]</span> <span class="ltx_bibblock"> Karol J Piczak. </span> <span class="ltx_bibblock">Esc: Dataset for environmental sound classification. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib165.1.1">Proceedings of the 23rd ACM international conference on Multimedia</span>, pages 1015–1018, 2015. </span> </li> <li class="ltx_bibitem" id="bib.bib166"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[166]</span> <span class="ltx_bibblock"> Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. </span> <span class="ltx_bibblock">Speech resynthesis from discrete disentangled self-supervised representations. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib166.1.1">arXiv preprint arXiv:2104.00355</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib167"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[167]</span> <span class="ltx_bibblock"> Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. </span> <span class="ltx_bibblock">Meld: A multimodal multi-party dataset for emotion recognition in conversations. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib167.1.1">arXiv preprint arXiv:1810.02508</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib168"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[168]</span> <span class="ltx_bibblock"> Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. </span> <span class="ltx_bibblock">Mls: A large-scale multilingual dataset for speech research. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib168.1.1">arXiv preprint arXiv:2012.03411</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib169"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[169]</span> <span class="ltx_bibblock"> Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. </span> <span class="ltx_bibblock">Robust speech recognition via large-scale weak supervision. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib169.1.1">International conference on machine learning</span>, pages 28492–28518. PMLR, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib170"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[170]</span> <span class="ltx_bibblock"> Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 
</span> <span class="ltx_bibblock">Direct preference optimization: Your language model is secretly a reward model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib170.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib171"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[171]</span> <span class="ltx_bibblock"> Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel Bittner. </span> <span class="ltx_bibblock">Musdb18: A corpus for music separation. </span> <span class="ltx_bibblock">2017. </span> </li> <li class="ltx_bibitem" id="bib.bib172"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[172]</span> <span class="ltx_bibblock"> Anton Ratnarajah, Shi-Xiong Zhang, and Dong Yu. </span> <span class="ltx_bibblock">M3-audiodec: Multi-channel multi-speaker multi-spatial audio codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib172.1.1">arXiv preprint arXiv:2309.07416</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib173"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[173]</span> <span class="ltx_bibblock"> Antoine Raux and Maxine Eskenazi. </span> <span class="ltx_bibblock">A finite-state turn-taking model for spoken dialog systems. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib173.1.1">Proceedings of human language technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics</span>, pages 629–637, 2009. </span> </li> <li class="ltx_bibitem" id="bib.bib174"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[174]</span> <span class="ltx_bibblock"> CK Reddy, E Beyrami, H Dubey, V Gopal, R Cheng, R Cutler, S Matusevych, R Aichner, A Aazami, S Braun, et al. 
</span> <span class="ltx_bibblock">The interspeech 2020 deep noise suppression challenge: Datasets, subjective speech quality and testing framework. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib174.1.1">arXiv preprint arXiv:2001.08662</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib175"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[175]</span> <span class="ltx_bibblock"> Siva Reddy, Danqi Chen, and Christopher D Manning. </span> <span class="ltx_bibblock">Coqa: A conversational question answering challenge. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib175.1.1">Transactions of the Association for Computational Linguistics</span>, 7:249–266, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib176"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[176]</span> <span class="ltx_bibblock"> Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. </span> <span class="ltx_bibblock">Fastspeech 2: Fast and high-quality end-to-end text to speech. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib176.1.1">arXiv preprint arXiv:2006.04558</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib177"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[177]</span> <span class="ltx_bibblock"> Yong Ren, Tao Wang, Jiangyan Yi, Le Xu, Jianhua Tao, Chu Yuan Zhang, and Junzuo Zhou. </span> <span class="ltx_bibblock">Fewer-token neural speech codec with time-invariant codes. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib177.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 12737–12741. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib178"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[178]</span> <span class="ltx_bibblock"> Anthony Rousseau, Paul Deléglise, and Yannick Estève. 
</span> <span class="ltx_bibblock">Ted-lium: an automatic speech recognition dedicated corpus. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib178.1.1">LREC</span>, pages 125–129, 2012. </span> </li> <li class="ltx_bibitem" id="bib.bib179"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[179]</span> <span class="ltx_bibblock"> Paul K Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, et al. </span> <span class="ltx_bibblock">Audiopalm: A large language model that can speak and listen. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib179.1.1">arXiv preprint arXiv:2306.12925</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib180"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[180]</span> <span class="ltx_bibblock"> Harvey Sacks, Emanuel A Schegloff, and Gail Jefferson. </span> <span class="ltx_bibblock">A simplest systematics for the organization of turn-taking for conversation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib180.1.1">Language</span>, 50(4):696–735, 1974. </span> </li> <li class="ltx_bibitem" id="bib.bib181"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[181]</span> <span class="ltx_bibblock"> Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. </span> <span class="ltx_bibblock">Winogrande: An adversarial winograd schema challenge at scale. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib181.1.1">Communications of the ACM</span>, 64(9):99–106, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib182"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[182]</span> <span class="ltx_bibblock"> S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, and Dinesh Manocha. 
</span> <span class="ltx_bibblock">Mmau: A massive multi-task audio understanding and reasoning benchmark. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib182.1.1">arXiv preprint arXiv:2410.19168</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib183"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[183]</span> <span class="ltx_bibblock"> Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. </span> <span class="ltx_bibblock">A dataset and taxonomy for urban sound research. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib183.1.1">Proceedings of the 22nd ACM international conference on Multimedia</span>, pages 1041–1044, 2014. </span> </li> <li class="ltx_bibitem" id="bib.bib184"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[184]</span> <span class="ltx_bibblock"> Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. </span> <span class="ltx_bibblock">wav2vec: Unsupervised pre-training for speech recognition. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib184.1.1">arXiv preprint arXiv:1904.05862</span>, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib185"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[185]</span> <span class="ltx_bibblock"> John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. </span> <span class="ltx_bibblock">Proximal policy optimization algorithms. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib185.1.1">arXiv preprint arXiv:1707.06347</span>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib186"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[186]</span> <span class="ltx_bibblock"> Rico Sennrich, Barry Haddow, and Alexandra Birch. </span> <span class="ltx_bibblock">Neural machine translation of rare words with subword units. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib186.1.1">arXiv preprint arXiv:1508.07909</span>, 2015. </span> </li> <li class="ltx_bibitem" id="bib.bib187"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[187]</span> <span class="ltx_bibblock"> Cory Shain and Micha Elsner. </span> <span class="ltx_bibblock">Acquiring language from speech by learning to remember and predict. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib187.1.1">Proceedings of the 24th Conference on Computational Natural Language Learning</span>, pages 195–214, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib188"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[188]</span> <span class="ltx_bibblock"> Slava Shechtman and Avihu Dekel. </span> <span class="ltx_bibblock">Low bitrate high-quality rvqgan-based discrete speech tokenizer. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib188.1.1">Interspeech 2024</span>, pages 4174–4178, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib189"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[189]</span> <span class="ltx_bibblock"> Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. </span> <span class="ltx_bibblock">Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib189.1.1">arXiv preprint arXiv:2304.09116</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib190"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[190]</span> <span class="ltx_bibblock"> Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, and Ming Li. </span> <span class="ltx_bibblock">Aishell-3: A multi-speaker mandarin tts corpus and the baselines. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib190.1.1">arXiv preprint arXiv:2010.11567</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib191"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[191]</span> <span class="ltx_bibblock"> Yu Shu, Siwei Dong, Guangyao Chen, Wenhao Huang, Ruihua Zhang, Daochen Shi, Qiqi Xiang, and Yemin Shi. </span> <span class="ltx_bibblock">Llasm: Large language and speech model, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib192"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[192]</span> <span class="ltx_bibblock"> Shuzheng Si, Wentao Ma, Haoyu Gao, Yuchuan Wu, Ting-En Lin, Yinpei Dai, Hangyu Li, Rui Yan, Fei Huang, and Yongbin Li. </span> <span class="ltx_bibblock">Spokenwoz: A large-scale speech-text benchmark for spoken task-oriented dialogue agents. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib192.1.1">Advances in Neural Information Processing Systems</span>, 36, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib193"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[193]</span> <span class="ltx_bibblock"> Hubert Siuzdak, Florian Grötschla, and Luca A Lanzendörfer. </span> <span class="ltx_bibblock">Snac: Multi-scale neural audio codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib193.1.1">arXiv preprint arXiv:2410.14411</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib194"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[194]</span> <span class="ltx_bibblock"> David Snyder, Guoguo Chen, and Daniel Povey. </span> <span class="ltx_bibblock">Musan: A music, speech, and noise corpus. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib194.1.1">arXiv preprint arXiv:1510.08484</span>, 2015. 
</span> </li> <li class="ltx_bibitem" id="bib.bib195"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[195]</span> <span class="ltx_bibblock"> Tongyi SpeechTeam. </span> <span class="ltx_bibblock">Funaudiollm: Voice understanding and generation foundation models for natural interaction between humans and llms. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib195.1.1">arXiv preprint arXiv:2407.04051</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib196"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[196]</span> <span class="ltx_bibblock"> Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. </span> <span class="ltx_bibblock">Policy gradient methods for reinforcement learning with function approximation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib196.1.1">Advances in neural information processing systems</span>, 12, 1999. </span> </li> <li class="ltx_bibitem" id="bib.bib197"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[197]</span> <span class="ltx_bibblock"> Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. </span> <span class="ltx_bibblock">Commonsenseqa: A question answering challenge targeting commonsense knowledge. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib197.1.1">arXiv preprint arXiv:1811.00937</span>, 2018. </span> </li> <li class="ltx_bibitem" id="bib.bib198"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[198]</span> <span class="ltx_bibblock"> Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, and Chao Zhang. </span> <span class="ltx_bibblock">Salmonn: Towards generic hearing abilities for large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib198.1.1">arXiv preprint arXiv:2310.13289</span>, 2023. 
</span> </li> <li class="ltx_bibitem" id="bib.bib199"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[199]</span> <span class="ltx_bibblock"> Zhiyuan Tang, Dong Wang, Yanguang Xu, Jianwei Sun, Xiaoning Lei, Shuaijiang Zhao, Cheng Wen, Xingjun Tan, Chuandong Xie, Shuran Zhou, et al. </span> <span class="ltx_bibblock">Kespeech: An open source speech dataset of mandarin and its eight subdialects. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib199.1.1">Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib200"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[200]</span> <span class="ltx_bibblock"> Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. </span> <span class="ltx_bibblock">Llama: Open and efficient foundation language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib200.1.1">arXiv preprint arXiv:2302.13971</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib201"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[201]</span> <span class="ltx_bibblock"> Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. </span> <span class="ltx_bibblock">Attention is all you need. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib201.1.1">Advances in Neural Information Processing Systems</span>, 30, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib202"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[202]</span> <span class="ltx_bibblock"> Christophe Veaux, Junichi Yamagishi, and Simon King. </span> <span class="ltx_bibblock">The voice bank corpus: Design, collection and data analysis of a large regional accent speech database. 
</span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib202.1.1">2013 international conference oriental COCOSDA held jointly with 2013 conference on Asian spoken language research and evaluation (O-COCOSDA/CASLRE)</span>, pages 1–4. IEEE, 2013. </span> </li> <li class="ltx_bibitem" id="bib.bib203"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[203]</span> <span class="ltx_bibblock"> Bandhav Veluri, Benjamin N Peloquin, Bokai Yu, Hongyu Gong, and Shyamnath Gollakota. </span> <span class="ltx_bibblock">Beyond turn-based interfaces: Synchronous llms as full-duplex dialogue agents. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib203.1.1">arXiv preprint arXiv:2409.15594</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib204"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[204]</span> <span class="ltx_bibblock"> Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. </span> <span class="ltx_bibblock">Diffusion model alignment using direct preference optimization. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib204.1.1">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</span>, pages 8228–8238, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib205"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[205]</span> <span class="ltx_bibblock"> Bin Wang, Xunlong Zou, Geyu Lin, Shuo Sun, Zhuohan Liu, Wenyu Zhang, Zhengyuan Liu, AiTi Aw, and Nancy F Chen. </span> <span class="ltx_bibblock">Audiobench: A universal benchmark for audio large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib205.1.1">arXiv preprint arXiv:2406.16020</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib206"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[206]</span> <span class="ltx_bibblock"> Changhan Wang, Juan Pino, Anne Wu, and Jiatao Gu. </span> <span class="ltx_bibblock">Covost: A diverse multilingual speech-to-text translation corpus. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib206.1.1">arXiv preprint arXiv:2002.01320</span>, 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib207"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[207]</span> <span class="ltx_bibblock"> Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. </span> <span class="ltx_bibblock">Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib207.1.1">arXiv preprint arXiv:2101.00390</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib208"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[208]</span> <span class="ltx_bibblock"> Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, and Jiajun Zhang. </span> <span class="ltx_bibblock">Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib208.1.1">arXiv preprint arXiv:2309.00916</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib209"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[209]</span> <span class="ltx_bibblock"> Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al. </span> <span class="ltx_bibblock">Neural codec language models are zero-shot text to speech synthesizers. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib209.1.1">arXiv preprint arXiv:2301.02111</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib210"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[210]</span> <span class="ltx_bibblock"> Chunhui Wang, Chang Zeng, Bowen Zhang, Ziyang Ma, Yefan Zhu, Zifeng Cai, Jian Zhao, Zhonglin Jiang, and Yong Chen. </span> <span class="ltx_bibblock">Ham-tts: Hierarchical acoustic modeling for token-based zero-shot text-to-speech with model and data scaling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib210.1.1">arXiv preprint arXiv:2403.05989</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib211"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[211]</span> <span class="ltx_bibblock"> Peng Wang, Songshuo Lu, Yaohua Tang, Sijie Yan, Yuanjun Xiong, and Wei Xia. </span> <span class="ltx_bibblock">A full-duplex speech dialogue scheme based on large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib211.1.1">arXiv preprint arXiv:2405.19487</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib212"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[212]</span> <span class="ltx_bibblock"> Xiong Wang, Yangze Li, Chaoyou Fu, Lei Xie, Ke Li, Xing Sun, and Long Ma. </span> <span class="ltx_bibblock">Freeze-omni: A smart and low latency speech-to-speech dialogue model with frozen llm, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib213"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[213]</span> <span class="ltx_bibblock"> Xiong Wang, Yangze Li, Chaoyou Fu, Lei Xie, Ke Li, Xing Sun, and Long Ma. </span> <span class="ltx_bibblock">Freeze-omni: A smart and low latency speech-to-speech dialogue model with frozen llm. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib213.1.1">arXiv preprint arXiv:2411.00774</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib214"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[214]</span> <span class="ltx_bibblock"> Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. </span> <span class="ltx_bibblock">Self-instruct: Aligning language models with self-generated instructions. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib214.1.1">arXiv preprint arXiv:2212.10560</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib215"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[215]</span> <span class="ltx_bibblock"> Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. </span> <span class="ltx_bibblock">Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib215.1.1">arXiv preprint arXiv:2204.07705</span>, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib216"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[216]</span> <span class="ltx_bibblock"> Yuancheng Wang, Haoyue Zhan, Liwei Liu, Ruihong Zeng, Haotian Guo, Jiachen Zheng, Qiang Zhang, Shunsi Zhang, and Zhizheng Wu. </span> <span class="ltx_bibblock">Maskgct: Zero-shot text-to-speech with masked generative codec transformer. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib216.1.1">arXiv preprint arXiv:2409.00750</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib217"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[217]</span> <span class="ltx_bibblock"> Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. </span> <span class="ltx_bibblock">Finetuned language models are zero-shot learners. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib217.1.1">arXiv preprint arXiv:2109.01652</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib218"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[218]</span> <span class="ltx_bibblock"> Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. </span> <span class="ltx_bibblock">Chain-of-thought prompting elicits reasoning in large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib218.1.1">Advances in neural information processing systems</span>, 35:24824–24837, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib219"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[219]</span> <span class="ltx_bibblock"> Di Wu, Binbin Zhang, Chao Yang, Zhendong Peng, Wenjing Xia, Xiaoyu Chen, and Xin Lei. </span> <span class="ltx_bibblock">U2++: Unified two-pass bidirectional end-to-end model for speech recognition. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib219.1.1">arXiv preprint arXiv:2106.05642</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib220"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[220]</span> <span class="ltx_bibblock"> Yi-Chiao Wu, Israel D Gebru, Dejan Marković, and Alexander Richard. </span> <span class="ltx_bibblock">Audiodec: An open-source streaming high-fidelity neural audio codec. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib220.1.1">ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 1–5. IEEE, 2023. 
</span> </li> <li class="ltx_bibitem" id="bib.bib221"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[221]</span> <span class="ltx_bibblock"> Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. </span> <span class="ltx_bibblock">Show-o: One single transformer to unify multimodal understanding and generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib221.1.1">arXiv preprint arXiv:2408.12528</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib222"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[222]</span> <span class="ltx_bibblock"> Zhifei Xie and Changqiao Wu. </span> <span class="ltx_bibblock">Mini-omni: Language models can hear, talk while thinking in streaming. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib222.1.1">arXiv preprint arXiv:2408.16725</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib223"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[223]</span> <span class="ltx_bibblock"> Zhifei Xie and Changqiao Wu. </span> <span class="ltx_bibblock">Mini-omni2: Towards open-source gpt-4o with vision, speech and duplex capabilities, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib224"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[224]</span> <span class="ltx_bibblock"> Detai Xin, Xu Tan, Shinnosuke Takamichi, and Hiroshi Saruwatari. </span> <span class="ltx_bibblock">Bigcodec: Pushing the limits of low-bitrate neural speech codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib224.1.1">arXiv preprint arXiv:2409.05377</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib225"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[225]</span> <span class="ltx_bibblock"> Yaoxun Xu, Hangting Chen, Jianwei Yu, Wei Tan, Rongzhi Gu, Shun Lei, Zhiwei Lin, and Zhiyong Wu. 
</span> <span class="ltx_bibblock">Mucodec: Ultra low-bitrate music codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib225.1.1">arXiv preprint arXiv:2409.13216</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib226"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[226]</span> <span class="ltx_bibblock"> Zhongweiyang Xu, Yong Xu, Vinay Kothapally, Heming Wang, Muqiao Yang, and Dong Yu. </span> <span class="ltx_bibblock">Spatialcodec: Neural spatial speech coding. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib226.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 1131–1135. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib227"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[227]</span> <span class="ltx_bibblock"> Hongfei Xue, Yuhao Liang, Bingshen Mu, Shiliang Zhang, Qian Chen, and Lei Xie. </span> <span class="ltx_bibblock">E-chat: Emotion-sensitive spoken dialogue system with large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib227.1.1">arXiv preprint arXiv:2401.00475</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib228"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[228]</span> <span class="ltx_bibblock"> An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. </span> <span class="ltx_bibblock">Qwen2 technical report. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib228.1.1">arXiv preprint arXiv:2407.10671</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib229"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[229]</span> <span class="ltx_bibblock"> Dongchao Yang, Haohan Guo, Yuanyuan Wang, Rongjie Huang, Xiang Li, Xu Tan, Xixin Wu, and Helen Meng. 
</span> <span class="ltx_bibblock">Uniaudio 1.5: Large language model-driven audio codec is a few-shot audio task learner. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib229.1.1">arXiv preprint arXiv:2406.10056</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib230"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[230]</span> <span class="ltx_bibblock"> Dongchao Yang, Songxiang Liu, Rongjie Huang, Jinchuan Tian, Chao Weng, and Yuexian Zou. </span> <span class="ltx_bibblock">Hifi-codec: Group-residual vector quantization for high fidelity audio codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib230.1.1">arXiv preprint arXiv:2305.02765</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib231"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[231]</span> <span class="ltx_bibblock"> Dongchao Yang, Songxiang Liu, Rongjie Huang, Chao Weng, and Helen Meng. </span> <span class="ltx_bibblock">Instructtts: Modelling expressive tts in discrete latent space with natural language style prompt. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib231.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib232"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[232]</span> <span class="ltx_bibblock"> Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, et al. </span> <span class="ltx_bibblock">Uniaudio: An audio foundation model toward universal audio generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib232.1.1">arXiv preprint arXiv:2310.00704</span>, 2023. 
</span> </li> <li class="ltx_bibitem" id="bib.bib233"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[233]</span> <span class="ltx_bibblock"> Haici Yang, Inseon Jang, and Minje Kim. </span> <span class="ltx_bibblock">Generative de-quantization for neural speech codec via latent diffusion. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib233.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 1251–1255. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib234"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[234]</span> <span class="ltx_bibblock"> Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, et al. </span> <span class="ltx_bibblock">Air-bench: Benchmarking large audio-language models via generative comprehension. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib234.1.1">arXiv preprint arXiv:2402.07729</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib235"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[235]</span> <span class="ltx_bibblock"> Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, et al. </span> <span class="ltx_bibblock">Superb: Speech processing universal performance benchmark. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib235.1.1">arXiv preprint arXiv:2105.01051</span>, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib236"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[236]</span> <span class="ltx_bibblock"> Zhen Ye, Peiwen Sun, Jiahe Lei, Hongzhan Lin, Xu Tan, Zheqi Dai, Qiuqiang Kong, Jianyi Chen, Jiahao Pan, Qifeng Liu, Yike Guo, and Wei Xue. 
</span> <span class="ltx_bibblock">Codec does matter: Exploring the semantic shortcoming of codec for audio language model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib236.1.1">arXiv preprint arXiv:2408.17175</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib237"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[237]</span> <span class="ltx_bibblock"> Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. </span> <span class="ltx_bibblock">Megabyte: Predicting million-byte sequences with multiscale transformers. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib237.1.1">Advances in Neural Information Processing Systems</span>, 36:78808–78823, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib238"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[238]</span> <span class="ltx_bibblock"> Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. </span> <span class="ltx_bibblock">Soundstream: An end-to-end neural audio codec. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib238.1.1">IEEE/ACM Transactions on Audio, Speech, and Language Processing</span>, 30:495–507, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib239"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[239]</span> <span class="ltx_bibblock"> Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. </span> <span class="ltx_bibblock">Hellaswag: Can a machine really finish your sentence? </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib239.1.1">arXiv preprint arXiv:1905.07830</span>, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib240"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[240]</span> <span class="ltx_bibblock"> Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 
</span> <span class="ltx_bibblock">Libritts: A corpus derived from librispeech for text-to-speech. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib240.1.1">arXiv preprint arXiv:1904.02882</span>, 2019. </span> </li> <li class="ltx_bibitem" id="bib.bib241"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[241]</span> <span class="ltx_bibblock"> Binbin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, et al. </span> <span class="ltx_bibblock">Wenetspeech: A 10000+ hours multi-domain mandarin corpus for speech recognition. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib241.1.1">ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 6182–6186. IEEE, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib242"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[242]</span> <span class="ltx_bibblock"> Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. </span> <span class="ltx_bibblock">Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib242.1.1">arXiv preprint arXiv:2305.11000</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib243"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[243]</span> <span class="ltx_bibblock"> Dong Zhang, Zhaowei Li, Shimin Li, Xin Zhang, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. </span> <span class="ltx_bibblock">Speechalign: Aligning speech generation to human preferences. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib243.1.1">arXiv preprint arXiv:2404.05600</span>, 2024. 
</span> </li> <li class="ltx_bibitem" id="bib.bib244"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[244]</span> <span class="ltx_bibblock"> Dong Zhang, Xin Zhang, Jun Zhan, Shimin Li, Yaqian Zhou, and Xipeng Qiu. </span> <span class="ltx_bibblock">Speechgpt-gen: Scaling chain-of-information speech generation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib244.1.1">arXiv preprint arXiv:2401.13527</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib245"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[245]</span> <span class="ltx_bibblock"> Lichao Zhang, Ruiqi Li, Shoutong Wang, Liqun Deng, Jinglin Liu, Yi Ren, Jinzheng He, Rongjie Huang, Jieming Zhu, Xiao Chen, et al. </span> <span class="ltx_bibblock">M4singer: A multi-style, multi-singer and musical score provided mandarin singing corpus. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib245.1.1">Advances in Neural Information Processing Systems</span>, 35:6914–6926, 2022. </span> </li> <li class="ltx_bibitem" id="bib.bib246"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[246]</span> <span class="ltx_bibblock"> Qinglin Zhang, Luyao Cheng, Chong Deng, Qian Chen, Wen Wang, Siqi Zheng, Jiaqing Liu, Hai Yu, and Chaohong Tan. </span> <span class="ltx_bibblock">Omniflatten: An end-to-end gpt model for seamless voice conversation. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib246.1.1">arXiv preprint arXiv:2410.17799</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib247"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[247]</span> <span class="ltx_bibblock"> Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. </span> <span class="ltx_bibblock">Bertscore: Evaluating text generation with bert. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib247.1.1">arXiv preprint arXiv:1904.09675</span>, 2019. 
</span> </li> <li class="ltx_bibitem" id="bib.bib248"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[248]</span> <span class="ltx_bibblock"> Xin Zhang, Xiang Lyu, Zhihao Du, Qian Chen, Dong Zhang, Hangrui Hu, Chaohong Tan, Tianyu Zhao, Yuxuan Wang, Bin Zhang, et al. </span> <span class="ltx_bibblock">Intrinsicvoice: Empowering llms with intrinsic real-time voice interaction abilities. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib248.1.1">arXiv preprint arXiv:2410.08035</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib249"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[249]</span> <span class="ltx_bibblock"> Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu. </span> <span class="ltx_bibblock">Speechtokenizer: Unified speech tokenizer for speech large language models. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib249.1.1">arXiv preprint arXiv:2308.16692</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib250"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[250]</span> <span class="ltx_bibblock"> Ziqiang Zhang, Long Zhou, Chengyi Wang, Sanyuan Chen, Yu Wu, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al. </span> <span class="ltx_bibblock">Speak foreign languages with your own voice: Cross-lingual neural codec language modeling. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib250.1.1">arXiv preprint arXiv:2303.03926</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib251"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[251]</span> <span class="ltx_bibblock"> Fang Zheng, Guoliang Zhang, and Zhanjiang Song. </span> <span class="ltx_bibblock">Comparison of different implementations of mfcc. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib251.1.1">Journal of Computer Science and Technology</span>, 16:582–589, 2001. 
</span> </li> <li class="ltx_bibitem" id="bib.bib252"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[252]</span> <span class="ltx_bibblock"> Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. </span> <span class="ltx_bibblock">Judging llm-as-a-judge with mt-bench and chatbot arena. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib252.1.1">Advances in Neural Information Processing Systems</span>, 36:46595–46623, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib253"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[253]</span> <span class="ltx_bibblock"> Yannan Zheng, Jiawei Luo, Weiling Chen, Zuoyong Li, and Tiesong Zhao. </span> <span class="ltx_bibblock">Fuvc: A flexible codec for underwater video transmission. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib253.1.1">IEEE Transactions on Geoscience and Remote Sensing</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib254"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[254]</span> <span class="ltx_bibblock"> Youqiang Zheng, Weiping Tu, Li Xiao, and Xinmeng Xu. </span> <span class="ltx_bibblock">Supercodec: A neural speech codec with selective back-projection network. </span> <span class="ltx_bibblock">In <span class="ltx_text ltx_font_italic" id="bib.bib254.1.1">ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</span>, pages 566–570. IEEE, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib255"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[255]</span> <span class="ltx_bibblock"> Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. </span> <span class="ltx_bibblock">Agieval: A human-centric benchmark for evaluating foundation models. 
</span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib255.1.1">arXiv preprint arXiv:2304.06364</span>, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib256"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[256]</span> <span class="ltx_bibblock"> Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. </span> <span class="ltx_bibblock">Transfusion: Predict the next token and diffuse images with one multi-modal model. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib256.1.1">arXiv preprint arXiv:2408.11039</span>, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib257"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[257]</span> <span class="ltx_bibblock"> Yongxin Zhu, Bocheng Li, Yifei Xin, and Linli Xu. </span> <span class="ltx_bibblock">Addressing representation collapse in vector quantized models with one linear layer. </span> <span class="ltx_bibblock"><span class="ltx_text ltx_font_italic" id="bib.bib257.1.1">arXiv preprint arXiv:2411.02038</span>, 2024. </span> </li> </ul> </section> <section class="ltx_appendix" id="A1"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix A </span>Resources about Music and Sound Datasets</h2> <div class="ltx_para" id="A1.p1"> <p class="ltx_p" id="A1.p1.1">This section lists commonly used music and sound datasets. These datasets span multiple audio modalities, including environmental sounds, music, and emotional sounds, and can serve as useful resources for developing future spoken dialogue systems. 
Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A1.T4" title="Table 4 ‣ Appendix A Resources about Music and Sound Datasets ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">4</span></a> summarizes the basic information for each dataset, including its name, number of samples, download link, and modality type.</p> </div> <figure class="ltx_table" id="A1.T4"> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="A1.T4.2.1.1" style="font-size:90%;">Table 4</span>: </span><span class="ltx_text" id="A1.T4.3.2" style="font-size:90%;">Music and Non-Speech Sound Datasets</span></figcaption> <div class="ltx_inline-block ltx_align_center ltx_transformed_outer" id="A1.T4.4" style="width:433.6pt;height:180pt;vertical-align:-0.0pt;"><span class="ltx_transformed_inner" style="transform:translate(-151.8pt,63.0pt) scale(0.588179666173187,0.588179666173187) ;"> <table class="ltx_tabular ltx_align_middle" id="A1.T4.4.1"> <tr class="ltx_tr" id="A1.T4.4.1.1"> <td class="ltx_td ltx_align_left ltx_border_t" id="A1.T4.4.1.1.1"><span class="ltx_text ltx_font_bold" id="A1.T4.4.1.1.1.1">Dataset</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.1.2"><span class="ltx_text ltx_font_bold" id="A1.T4.4.1.1.2.1">Size</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.1.3"><span class="ltx_text ltx_font_bold" id="A1.T4.4.1.1.3.1">URL</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.1.4"><span class="ltx_text ltx_font_bold" id="A1.T4.4.1.1.4.1">Modality</span></td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.2"> <td class="ltx_td ltx_align_left ltx_border_t" id="A1.T4.4.1.2.1">ESC-50 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib165" title="">165</a>]</cite> </td> <td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.2.2">2,000 clips (5s each)</td> 
<td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.2.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/karoldvl/ESC-50" title="">https://github.com/karoldvl/ESC-50</a></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A1.T4.4.1.2.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.3"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.3.1">UrbanSound8K <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib183" title="">183</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.3.2">8,732 clips (≤4s each)</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.3.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://urbansounddataset.weebly.com/urbansound8k.html" title="">https://urbansounddataset.weebly.com/urbansound8k.html</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.3.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.4"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.4.1">AudioSet <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib65" title="">65</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.4.2">2M+ clips (10s each)</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.4.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://research.google.com/audioset/" title="">https://research.google.com/audioset/</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.4.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.5"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.5.1">TUT Acoustic Scenes 2017 <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib151" title="">151</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.5.2">52,630 segments</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.5.3"><a class="ltx_ref ltx_url ltx_font_typewriter" 
href="https://zenodo.org/record/400515" title="">https://zenodo.org/record/400515</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.5.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.6"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.6.1">Warblr</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.6.2">10,000 clips</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.6.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://warblr.net/" title="">https://warblr.net/</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.6.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.7"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.7.1">FSD50K <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib60" title="">60</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.7.2">51,197 clips (total 108.3 hours)</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.7.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://zenodo.org/record/4060432" title="">https://zenodo.org/record/4060432</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.7.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.8"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.8.1">DCASE Challenge <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib150" title="">150</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.8.2">varies annually</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.8.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="http://dcase.community/" title="">http://dcase.community/</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.8.4">Sound</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.9"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.9.1">IRMAS <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib17" title="">17</a>]</cite> </td> <td 
class="ltx_td ltx_align_center" id="A1.T4.4.1.9.2">6,705 audio files (3s each)</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.9.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.upf.edu/web/mtg/irmas" title="">https://www.upf.edu/web/mtg/irmas</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.9.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.10"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.10.1">FMA <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib42" title="">42</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.10.2">106,574 tracks</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.10.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/mdeff/fma" title="">https://github.com/mdeff/fma</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.10.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.11"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.11.1">NSynth <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib56" title="">56</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.11.2">305,979 notes</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.11.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://magenta.tensorflow.org/datasets/nsynth" title="">https://magenta.tensorflow.org/datasets/nsynth</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.11.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.12"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.12.1">EMOMusic</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.12.2">744 songs</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.12.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://cvml.unige.ch/databases/emoMusic/" title="">https://cvml.unige.ch/databases/emoMusic/</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.12.4">Music</td> 
</tr> <tr class="ltx_tr" id="A1.T4.4.1.13"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.13.1">MedleyDB <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib16" title="">16</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.13.2">122 multitrack recordings</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.13.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://medleydb.weebly.com/" title="">https://medleydb.weebly.com/</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.13.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.14"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.14.1">MagnaTagATune</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.14.2">25,863 clips (30s each)</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.14.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset" title="">https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.14.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.15"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.15.1">MUSDB <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib171" title="">171</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.15.2">150 songs</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.15.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://paperswithcode.com/dataset/musdb18" title="">https://paperswithcode.com/dataset/musdb18</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.15.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.16"> <td class="ltx_td ltx_align_left" id="A1.T4.4.1.16.1">M4Singer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib245" title="">245</a>]</cite> </td> <td class="ltx_td ltx_align_center" 
id="A1.T4.4.1.16.2">700 songs</td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.16.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/M4Singer/M4Singer" title="">https://github.com/M4Singer/M4Singer</a></td> <td class="ltx_td ltx_align_center" id="A1.T4.4.1.16.4">Music</td> </tr> <tr class="ltx_tr" id="A1.T4.4.1.17"> <td class="ltx_td ltx_align_left ltx_border_b" id="A1.T4.4.1.17.1">Jamendo</td> <td class="ltx_td ltx_align_center ltx_border_b" id="A1.T4.4.1.17.2">600k songs</td> <td class="ltx_td ltx_align_center ltx_border_b" id="A1.T4.4.1.17.3"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.jamendo.com/?language=en" title="">https://www.jamendo.com/?language=en</a></td> <td class="ltx_td ltx_align_center ltx_border_b" id="A1.T4.4.1.17.4">Music</td> </tr> </table> </span></div> </figure> </section> <section class="ltx_appendix" id="A2"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix B </span>Open-source Spoken Dialogue Models</h2> <div class="ltx_para" id="A2.p1"> <p class="ltx_p" id="A2.p1.1">In this section, we provide a comprehensive list of publicly available and open-source spoken dialogue models in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A2.T5" title="Table 5 ‣ Appendix B Open-source Spoken Dialogue Models ‣ WavChat: A Survey of Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">5</span></a>.</p> </div> <figure class="ltx_table" id="A2.T5"> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="A2.T5.2.1.1" style="font-size:90%;">Table 5</span>: </span><span class="ltx_text" id="A2.T5.3.2" style="font-size:90%;">A comprehensive list of publicly available spoken dialogue models and their URL</span></figcaption> <div class="ltx_inline-block ltx_align_center ltx_transformed_outer" id="A2.T5.4" style="width:433.6pt;height:442.8pt;vertical-align:-0.0pt;"><span 
class="ltx_transformed_inner" style="transform:translate(-21.1pt,21.6pt) scale(0.911177294153841,0.911177294153841) ;"> <table class="ltx_tabular ltx_align_middle" id="A2.T5.4.1"> <tr class="ltx_tr" id="A2.T5.4.1.1"> <td class="ltx_td ltx_align_center ltx_border_t" id="A2.T5.4.1.1.1" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="A2.T5.4.1.1.1.1">Model</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A2.T5.4.1.1.2" style="padding-top:2pt;padding-bottom:2pt;"><span class="ltx_text ltx_font_bold" id="A2.T5.4.1.1.2.1">URL</span></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.2"> <td class="ltx_td ltx_align_center ltx_border_t" id="A2.T5.4.1.2.1" style="padding-top:2pt;padding-bottom:2pt;">AudioGPT</td> <td class="ltx_td ltx_align_center ltx_border_t" id="A2.T5.4.1.2.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/AIGC-Audio/AudioGPT" title="">https://github.com/AIGC-Audio/AudioGPT</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.3"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.3.1" style="padding-top:2pt;padding-bottom:2pt;">SpeechGPT</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.3.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/0nutation/SpeechGPT" title="">https://github.com/0nutation/SpeechGPT</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.4"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.4.1" style="padding-top:2pt;padding-bottom:2pt;">Freeze-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.4.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/VITA-MLLM/Freeze-Omni" title="">https://github.com/VITA-MLLM/Freeze-Omni</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.5"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.5.1" 
style="padding-top:2pt;padding-bottom:2pt;">Baichuan-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.5.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/westlake-baichuan-mllm/bc-omni" title="">https://github.com/westlake-baichuan-mllm/bc-omni</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.6"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.6.1" style="padding-top:2pt;padding-bottom:2pt;">GLM-4-Voice</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.6.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/THUDM/GLM-4-Voice" title="">https://github.com/THUDM/GLM-4-Voice</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.7"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.7.1" style="padding-top:2pt;padding-bottom:2pt;">Mini-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.7.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/gpt-omni/mini-omni" title="">https://github.com/gpt-omni/mini-omni</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.8"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.8.1" style="padding-top:2pt;padding-bottom:2pt;">Mini-Omni2</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.8.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/gpt-omni/mini-omni2" title="">https://github.com/gpt-omni/mini-omni2</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.9"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.9.1" style="padding-top:2pt;padding-bottom:2pt;">FunAudioLLM</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.9.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/FunAudioLLM" title="">https://github.com/FunAudioLLM</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.10"> <td 
class="ltx_td ltx_align_center" id="A2.T5.4.1.10.1" style="padding-top:2pt;padding-bottom:2pt;">Qwen-Audio</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.10.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/QwenLM/Qwen-Audio" title="">https://github.com/QwenLM/Qwen-Audio</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.11"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.11.1" style="padding-top:2pt;padding-bottom:2pt;">Qwen2-Audio</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.11.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/QwenLM/Qwen2-Audio" title="">https://github.com/QwenLM/Qwen2-Audio</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.12"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.12.1" style="padding-top:2pt;padding-bottom:2pt;">LLaMA3.1</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.12.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://www.llama.com" title="">https://www.llama.com</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.13"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.13.1" style="padding-top:2pt;padding-bottom:2pt;">Audio Flamingo</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.13.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/NVIDIA/audio-flamingo" title="">https://github.com/NVIDIA/audio-flamingo</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.14"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.14.1" style="padding-top:2pt;padding-bottom:2pt;">Spirit LM</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.14.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/spiritlm" title="">https://github.com/facebookresearch/spiritlm</a></td> </tr> <tr 
class="ltx_tr" id="A2.T5.4.1.15"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.15.1" style="padding-top:2pt;padding-bottom:2pt;">dGSLM</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.15.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/dgslm" title="">https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/dgslm</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.16"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.16.1" style="padding-top:2pt;padding-bottom:2pt;">Spoken-LLM</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.16.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://arxiv.org/abs/2305.11000" title="">https://arxiv.org/abs/2305.11000</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.17"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.17.1" style="padding-top:2pt;padding-bottom:2pt;">LLaMA-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.17.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/ictnlp/LLaMA-Omni" title="">https://github.com/ictnlp/LLaMA-Omni</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.18"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.18.1" style="padding-top:2pt;padding-bottom:2pt;">Moshi</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.18.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/kyutai-labs/moshi" title="">https://github.com/kyutai-labs/moshi</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.19"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.19.1" style="padding-top:2pt;padding-bottom:2pt;">SALMONN</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.19.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" 
href="https://github.com/bytedance/SALMONN" title="">https://github.com/bytedance/SALMONN</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.20"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.20.1" style="padding-top:2pt;padding-bottom:2pt;">LTU-AS</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.20.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/YuanGongND/ltu" title="">https://github.com/YuanGongND/ltu</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.21"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.21.1" style="padding-top:2pt;padding-bottom:2pt;">VITA</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.21.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/VITA-MLLM/VITA" title="">https://github.com/VITA-MLLM/VITA</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.22"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.22.1" style="padding-top:2pt;padding-bottom:2pt;">SpeechGPT-Gen</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.22.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/0nutation/SpeechGPT" title="">https://github.com/0nutation/SpeechGPT</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.23"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.23.1" style="padding-top:2pt;padding-bottom:2pt;">Westlake-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.23.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/xinchen-ai/Westlake-Omni" title="">https://github.com/xinchen-ai/Westlake-Omni</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.24"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.24.1" style="padding-top:2pt;padding-bottom:2pt;">MooER-Omni</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.24.2" style="padding-top:2pt;padding-bottom:2pt;"><a 
class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/MooreThreads/MooER" title="">https://github.com/MooreThreads/MooER</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.25"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.25.1" style="padding-top:2pt;padding-bottom:2pt;">Hertz-dev</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.25.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/Standard-Intelligence/hertz-dev" title="">https://github.com/Standard-Intelligence/hertz-dev</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.26"> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.26.1" style="padding-top:2pt;padding-bottom:2pt;">Fish-Agent</td> <td class="ltx_td ltx_align_center" id="A2.T5.4.1.26.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/fishaudio/fish-speech" title="">https://github.com/fishaudio/fish-speech</a></td> </tr> <tr class="ltx_tr" id="A2.T5.4.1.27"> <td class="ltx_td ltx_align_center ltx_border_b" id="A2.T5.4.1.27.1" style="padding-top:2pt;padding-bottom:2pt;">SpeechGPT2</td> <td class="ltx_td ltx_align_center ltx_border_b" id="A2.T5.4.1.27.2" style="padding-top:2pt;padding-bottom:2pt;"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://0nutation.github.io/SpeechGPT2.github.io/" title="">https://0nutation.github.io/SpeechGPT2.github.io/</a></td> </tr> </table> </span></div> </figure> </section> <section class="ltx_appendix" id="A3"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix C </span>Open-source Codec Models</h2> <div class="ltx_para" id="A3.p1"> <p class="ltx_p" id="A3.p1.1">In this section, we provide a comprehensive list of publicly available and open-source codec models, as shown in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#A3.T6" title="Table 6 ‣ Appendix C Open-source Codec Models ‣ WavChat: A Survey of 
Spoken Dialogue Models"><span class="ltx_text ltx_ref_tag">6</span></a>.</p> </div> <figure class="ltx_table" id="A3.T6"> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_table"><span class="ltx_text" id="A3.T6.2.1.1" style="font-size:90%;">Table 6</span>: </span><span class="ltx_text" id="A3.T6.3.2" style="font-size:90%;">A comprehensive list of publicly available codec models and their URLs</span></figcaption> <div class="ltx_inline-block ltx_align_center ltx_transformed_outer" id="A3.T6.4" style="width:433.6pt;height:439pt;vertical-align:-0.0pt;"><span class="ltx_transformed_inner" style="transform:translate(-76.6pt,77.5pt) scale(0.739036142052894,0.739036142052894) ;"> <table class="ltx_tabular ltx_align_middle" id="A3.T6.4.1"> <tr class="ltx_tr" id="A3.T6.4.1.1"> <td class="ltx_td ltx_align_center ltx_border_tt" id="A3.T6.4.1.1.1"><span class="ltx_text ltx_font_bold" id="A3.T6.4.1.1.1.1">Model</span></td> <td class="ltx_td ltx_align_center ltx_border_tt" id="A3.T6.4.1.1.2"><span class="ltx_text ltx_font_bold" id="A3.T6.4.1.1.2.1">URL</span></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.2"> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T6.4.1.2.1">Encodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib43" title="">43</a>]</cite> </td> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T6.4.1.2.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/encodec" title="">https://github.com/facebookresearch/encodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.3"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.3.1">SoundStream <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib238" title="">238</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.3.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/wesbz/SoundStream"
title="">https://github.com/wesbz/SoundStream</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.4"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.4.1">DAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib113" title="">113</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.4.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/descriptinc/descript-audio-codec" title="">https://github.com/descriptinc/descript-audio-codec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.5"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.5.1">WavTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib90" title="">90</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.5.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/jishengpeng/WavTokenizer" title="">https://github.com/jishengpeng/WavTokenizer</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.6"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.6.1">SpeechTokenizer <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib249" title="">249</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.6.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/ZhangXInFD/SpeechTokenizer" title="">https://github.com/ZhangXInFD/SpeechTokenizer</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.7"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.7.1">SNAC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib193" title="">193</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.7.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/hubertsiuzdak/snac" title="">https://github.com/hubertsiuzdak/snac</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.8"> <td class="ltx_td 
ltx_align_center" id="A3.T6.4.1.8.1">SemantiCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib135" title="">135</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.8.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/haoheliu/SemantiCodec-inference" title="">https://github.com/haoheliu/SemantiCodec-inference</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.9"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.9.1">Mimi <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib44" title="">44</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.9.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/kyutai-labs/moshi" title="">https://github.com/kyutai-labs/moshi</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.10"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.10.1">HiFi-Codec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib230" title="">230</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.10.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/yangdongchao/AcademiCodec" title="">https://github.com/yangdongchao/AcademiCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.11"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.11.1">FunCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib51" title="">51</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.11.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/modelscope/FunCodec" title="">https://github.com/modelscope/FunCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.12"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.12.1">APCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" 
href="https://arxiv.org/html/2411.13577v1#bib.bib4" title="">4</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.12.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/YangAi520/APCodec/tree/main" title="">https://github.com/YangAi520/APCodec/tree/main</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.13"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.13.1">AudioDec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib220" title="">220</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.13.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/AudioDec" title="">https://github.com/facebookresearch/AudioDec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.14"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.14.1">FACodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib100" title="">100</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.14.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/lifeiteng/naturalspeech3_facodec" title="">https://github.com/lifeiteng/naturalspeech3_facodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.15"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.15.1">Language-Codec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib89" title="">89</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.15.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/jishengpeng/Languagecodec" title="">https://github.com/jishengpeng/Languagecodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.16"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.16.1">XCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib236" title="">236</a>]</cite> </td> <td 
class="ltx_td ltx_align_center" id="A3.T6.4.1.16.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/zhenye234/xcodec" title="">https://github.com/zhenye234/xcodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.17"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.17.1">TiCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib177" title="">177</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.17.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/y-ren16/TiCodec" title="">https://github.com/y-ren16/TiCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.18"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.18.1">SoCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib70" title="">70</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.18.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/hhguo/SoCodec" title="">https://github.com/hhguo/SoCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.19"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.19.1">FUVC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib253" title="">253</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.19.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/z21110008/FUVC" title="">https://github.com/z21110008/FUVC</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.20"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.20.1">HILCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib3" title="">3</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.20.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/aask1357/hilcodec" title="">https://github.com/aask1357/hilcodec</a></td> 
</tr> <tr class="ltx_tr" id="A3.T6.4.1.21"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.21.1">LaDiffCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib233" title="">233</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.21.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/haiciyang/LaDiffCodec" title="">https://github.com/haiciyang/LaDiffCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.22"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.22.1">LLM-Codec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib229" title="">229</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.22.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/yangdongchao/LLM-Codec" title="">https://github.com/yangdongchao/LLM-Codec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.23"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.23.1">SpatialCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib226" title="">226</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.23.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/XZWY/SpatialCodec" title="">https://github.com/XZWY/SpatialCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.24"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.24.1">BigCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib224" title="">224</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.24.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/Aria-K-Alethia/BigCodec" title="">https://github.com/Aria-K-Alethia/BigCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.25"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.25.1">SuperCodec <cite 
class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib254" title="">254</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.25.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/exercise-book-yq/Supercodec" title="">https://github.com/exercise-book-yq/Supercodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.26"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.26.1">RepCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib86" title="">86</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.26.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/mct10/RepCodec" title="">https://github.com/mct10/RepCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.27"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.27.1">EnCodecMAE <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib164" title="">164</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.27.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/habla-liaa/encodecmae" title="">https://github.com/habla-liaa/encodecmae</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.28"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.28.1">MuCodec <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib225" title="">225</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.28.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/xuyaoxun/MuCodec" title="">https://github.com/xuyaoxun/MuCodec</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.29"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.29.1">SPARC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib32" title="">32</a>]</cite> </td> <td 
class="ltx_td ltx_align_center" id="A3.T6.4.1.29.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/Berkeley-Speech-Group/Speech-Articulatory-Coding" title="">https://github.com/Berkeley-Speech-Group/Speech-Articulatory-Coding</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.30"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.30.1">BANC <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib172" title="">172</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.30.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/anton-jeran/MULTI-AUDIODEC" title="">https://github.com/anton-jeran/MULTI-AUDIODEC</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.31"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.31.1">SpeechRVQ <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib188" title="">188</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.31.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://huggingface.co/ibm/DAC.speech.v1.0" title="">https://huggingface.co/ibm/DAC.speech.v1.0</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.32"> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.32.1">QINCo <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib87" title="">87</a>]</cite> </td> <td class="ltx_td ltx_align_center" id="A3.T6.4.1.32.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/facebookresearch/Qinco" title="">https://github.com/facebookresearch/Qinco</a></td> </tr> <tr class="ltx_tr" id="A3.T6.4.1.33"> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T6.4.1.33.1">SimVQ <cite class="ltx_cite ltx_citemacro_cite">[<a class="ltx_ref" href="https://arxiv.org/html/2411.13577v1#bib.bib257" title="">257</a>]</cite> </td> <td class="ltx_td ltx_align_center ltx_border_bb" 
id="A3.T6.4.1.33.2"><a class="ltx_ref ltx_url ltx_font_typewriter" href="https://github.com/youngsheen/SimVQ" title="">https://github.com/youngsheen/SimVQ</a></td> </tr> </table> </span></div> </figure> <div class="ltx_pagination ltx_role_newpage"></div> </section> </article> </div> <footer class="ltx_page_footer"> <div class="ltx_page_logo">Generated on Thu Nov 14 18:22:34 2024 by <a class="ltx_LaTeXML_logo" href="http://dlmf.nist.gov/LaTeXML/"><span style="letter-spacing:-0.2em; margin-right:0.1em;">L<span class="ltx_font_smallcaps" style="position:relative; bottom:2.2pt;">a</span>T<span class="ltx_font_smallcaps" style="font-size:120%;position:relative; bottom:-0.2ex;">e</span></span><span style="font-size:90%; position:relative; bottom:-0.2ex;">XML</span><img alt="Mascot Sammy" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAAOCAYAAAD5YeaVAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9wKExQZLWTEaOUAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAdpJREFUKM9tkL+L2nAARz9fPZNCKFapUn8kyI0e4iRHSR1Kb8ng0lJw6FYHFwv2LwhOpcWxTjeUunYqOmqd6hEoRDhtDWdA8ApRYsSUCDHNt5ul13vz4w0vWCgUnnEc975arX6ORqN3VqtVZbfbTQC4uEHANM3jSqXymFI6yWazP2KxWAXAL9zCUa1Wy2tXVxheKA9YNoR8Pt+aTqe4FVVVvz05O6MBhqUIBGk8Hn8HAOVy+T+XLJfLS4ZhTiRJgqIoVBRFIoric47jPnmeB1mW/9rr9ZpSSn3Lsmir1fJZlqWlUonKsvwWwD8ymc/nXwVBeLjf7xEKhdBut9Hr9WgmkyGEkJwsy5eHG5vN5g0AKIoCAEgkEkin0wQAfN9/cXPdheu6P33fBwB4ngcAcByHJpPJl+fn54mD3Gg0NrquXxeLRQAAwzAYj8cwTZPwPH9/sVg8PXweDAauqqr2cDjEer1GJBLBZDJBs9mE4zjwfZ85lAGg2+06hmGgXq+j3+/DsixYlgVN03a9Xu8jgCNCyIegIAgx13Vfd7vdu+FweG8YRkjXdWy329+dTgeSJD3ieZ7RNO0VAXAPwDEAO5VKndi2fWrb9jWl9Esul6PZbDY9Go1OZ7PZ9z/lyuD3OozU2wAAAABJRU5ErkJggg=="/></a> </div></footer> </div> </body> </html>