<!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="content-type"/> <title>Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text</title> <!--Generated on Tue Nov 12 00:21:54 2024 by LaTeXML (version 0.8.8) http://dlmf.nist.gov/LaTeXML/.--> <meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport"/> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv-fonts.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/latexml_styles.css" rel="stylesheet" type="text/css"/> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.3/html2canvas.min.js"></script> <script src="/static/browse/0.3.4/js/addons_new.js"></script> <script src="/static/browse/0.3.4/js/feedbackOverlay.js"></script> <meta content="Data visualization, user modeling, personalization, recommendation, Large Language Models (LLMs)" lang="en" name="keywords"/> <base href="/html/2411.07451v1/"/></head> <body> <nav class="ltx_page_navbar"> <nav class="ltx_TOC"> <ol class="ltx_toclist"> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S1" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1 </span>Introduction</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag 
ltx_tag_ref">2 </span>Related Work</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.SS1" title="In 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1 </span>Text to Visualization Generation</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.SS2" title="In 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.2 </span>Visualization + Text for Analysis</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.SS3" title="In 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.3 </span>Accessibility & Technical Literacy</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3 </span>Study</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.SS1" title="In 3. 
Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.1 </span>Participants</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.SS2" title="In 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2 </span>Method</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.SS2.SSS1" title="In 3.2. Method ‣ 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2.1 </span>User Characteristic Questions</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.SS2.SSS2" title="In 3.2. Method ‣ 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2.2 </span>Instructions</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.SS2.SSS3" title="In 3.2. Method ‣ 3. 
Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3.2.3 </span>Survey Questions</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4 </span>Results</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS1" title="In 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.1 </span>RQ1: General Preferences</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS2" title="In 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2 </span>RQ2: User Characteristics</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS2.SSS1" title="In 4.2. RQ2: User Characteristics ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.1 </span>Findings Summary</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS2.SSS2" title="In 4.2. RQ2: User Characteristics ‣ 4. 
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.2 </span>H2a: Influence of Data Visualization Experience on Preference</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS2.SSS3" title="In 4.2. RQ2: User Characteristics ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.3 </span>H2b: Influence of Data Analysis Experience on Preference</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS2.SSS4" title="In 4.2. RQ2: User Characteristics ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.2.4 </span>H2c: Influence of Age on Preference</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS3" title="In 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3 </span>RQ3: Work Experience</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS3.SSS1" title="In 4.3. RQ3: Work Experience ‣ 4. 
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3.1 </span>Findings Summary</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS3.SSS2" title="In 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3.2 </span>H3a: Influence of Role on Preference</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS3.SSS3" title="In 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3.3 </span>H3b: Influence of Industry on Preference</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS3.SSS4" title="In 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.3.4 </span>Preferences When Combining User Characteristics</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_subsection"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS4" title="In 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4 </span>RQ4: LLM Preference Predictions</span></a> <ol class="ltx_toclist ltx_toclist_subsection"> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS4.SSS1" title="In 4.4. 
RQ4: LLM Preference Predictions ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4.1 </span>H4a: LLM Alignment with Humans</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS4.SSS2" title="In 4.4. RQ4: LLM Preference Predictions ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4.2 </span>H4b: Personalized LLMs</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsubsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.SS4.SSS3" title="In 4.4. RQ4: LLM Preference Predictions ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">4.4.3 </span>H4c: User-specific Accuracy</span></a></li> </ol> </li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S5" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5 </span>Discussion</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S5.SS1" title="In 5. Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.1 </span>General Preferences</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S5.SS2" title="In 5. 
Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.2 </span>Influence of User Characteristics and Work Experience</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S5.SS3" title="In 5. Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">5.3 </span>Human Preference vs. GPT Preference</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S6" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">6 </span>Conclusion</span></a></li> <li class="ltx_tocentry ltx_tocentry_appendix"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A1" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A </span>Related Works Cont.</span></a> <ol class="ltx_toclist ltx_toclist_appendix"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A1.SS1" title="In Appendix A Related Works Cont. 
‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A.1 </span>Accessibility</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_appendix"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A2" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">B </span>Study Design</span></a> <ol class="ltx_toclist ltx_toclist_appendix"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A2.SS1" title="In Appendix B Study Design ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">B.1 </span>Human Annotations</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A2.SS2" title="In Appendix B Study Design ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">B.2 </span>User Characteristics Questions</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A2.SS3" title="In Appendix B Study Design ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">B.3 </span>Survey Questions</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_appendix"> <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3" title="In Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">C </span>Further Discussion</span></a> <ol 
class="ltx_toclist ltx_toclist_appendix"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.SS1" title="In Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">C.1 </span>Implications for Data Tools</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.SS2" title="In Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">C.2 </span>Influence of Work Experience</span></a></li> </ol> </li> </ol></nav> </nav> <div class="ltx_page_main"> <div class="ltx_page_content"> <article class="ltx_document ltx_authors_1line ltx_leqno"> <h1 class="ltx_title ltx_title_document">Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text</h1> <div class="ltx_authors"> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Reuben Luera </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id1.1.id1">University of California–San Diego</span><span class="ltx_text ltx_affiliation_city" id="id2.2.id2">San Diego</span><span class="ltx_text ltx_affiliation_state" id="id3.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id4.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:raluera@ucsd.edu">raluera@ucsd.edu</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Ryan Rossi </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" 
id="id5.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id6.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id7.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id8.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:ryrossi@adobe.com">ryrossi@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Franck Dernoncourt </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id9.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id10.2.id2">Seattle</span><span class="ltx_text ltx_affiliation_state" id="id11.3.id3">Washington</span><span class="ltx_text ltx_affiliation_country" id="id12.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:dernonco@adobe.com">dernonco@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Alexa Siu </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id13.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id14.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id15.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id16.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:asiu@adobe.com">asiu@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Sungchul Kim </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id17.1.id1">Adobe 
Research</span><span class="ltx_text ltx_affiliation_city" id="id18.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id19.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id20.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:sukim@adobe.com">sukim@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Tong Yu </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id21.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id22.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id23.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id24.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:tyu@adobe.com">tyu@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Ruiyi Zhang </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id25.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id26.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id27.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id28.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:ruizhang@adobe.com">ruizhang@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Xiang Chen </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id29.1.id1">Adobe Research</span><span class="ltx_text 
ltx_affiliation_city" id="id30.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id31.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id32.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:xiangche@adobe.com">xiangche@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Nedim Lipka </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id33.1.id1">Adobe Research</span><span class="ltx_text ltx_affiliation_city" id="id34.2.id2">San Jose</span><span class="ltx_text ltx_affiliation_state" id="id35.3.id3">California</span><span class="ltx_text ltx_affiliation_country" id="id36.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:lipka@adobe.com">lipka@adobe.com</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Zhehao Zhang </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id37.1.id1">Dartmouth College</span><span class="ltx_text ltx_affiliation_city" id="id38.2.id2">Hanover</span><span class="ltx_text ltx_affiliation_state" id="id39.3.id3">New Hampshire</span><span class="ltx_text ltx_affiliation_country" id="id40.4.id4">USA</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:zhehao.zhang.gr@dartmouth.edu">zhehao.zhang.gr@dartmouth.edu</a> </span></span></span> <span class="ltx_author_before">, </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Seon Gyeom Kim </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id41.1.id1">KAIST</span><span class="ltx_text 
ltx_affiliation_city" id="id42.2.id2">Daejeon</span><span class="ltx_text ltx_affiliation_country" id="id43.3.id3">South Korea</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:ksg%CB%990320@kaist.ac.kr">ksg˙0320@kaist.ac.kr</a> </span></span></span> <span class="ltx_author_before"> and </span><span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Tak Yeon Lee </span><span class="ltx_author_notes"> <span class="ltx_contact ltx_role_affiliation"><span class="ltx_text ltx_affiliation_institution" id="id44.1.id1">KAIST</span><span class="ltx_text ltx_affiliation_city" id="id45.2.id2">Daejeon</span><span class="ltx_text ltx_affiliation_country" id="id46.3.id3">South Korea</span> </span> <span class="ltx_contact ltx_role_email"><a href="mailto:takyeonlee@kaist.ac.kr">takyeonlee@kaist.ac.kr</a> </span></span></span> </div> <div class="ltx_dates">(2025; 20 February 2025; 12 March 2025; 5 June 2025)</div> <div class="ltx_abstract"> <h6 class="ltx_title ltx_title_abstract">Abstract.</h6> <p class="ltx_p" id="id47.id1">In this work, we study user preferences for seeing a chart, a table, or text in response to a question the user asks. This enables us to understand when it is best to show a chart, table, or text for a specific question. To this end, we conduct a user study in which participants are shown a question and asked which output they would prefer to see, and we use the resulting data to establish that a user’s personal traits do influence the data outputs that they prefer. Understanding how user characteristics impact a user’s preferences is critical to creating data tools with a better user experience. Additionally, we investigate to what degree an LLM can be used to replicate a user’s preferences, both with and without user preference data. Overall, these findings have significant implications for the development of data tools and the replication of human preferences using LLMs. 
Furthermore, this work demonstrates the potential use of LLMs to replicate user preference data, which has major implications for future user modeling and personalization research.</p> </div> <div class="ltx_keywords">Data visualization, user modeling, personalization, recommendation, Large Language Models (LLMs) </div> <span class="ltx_note ltx_note_frontmatter ltx_role_copyright" id="id1"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">copyright: </span>acmlicensed</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_journalyear" id="id2"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">journalyear: </span>2025</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_conference" id="id3"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">conference: </span>Make sure to enter the correct conference title from your rights confirmation email; April 28–May 2, 2025; Sydney, Australia</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_ccs" id="id4"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">ccs: </span>Human-Centered Computing Visualizations</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_ccs" id="id5"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">ccs: </span>Human-Centered Computing User Studies</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_ccs" id="id6"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span 
class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">ccs: </span>Information Systems Decision Support Systems</span></span></span><span class="ltx_note ltx_note_frontmatter ltx_role_ccs" id="id7"><sup class="ltx_note_mark">†</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">†</sup><span class="ltx_note_type">ccs: </span>Computing Methodologies Machine Learning</span></span></span> <section class="ltx_section" id="S1"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">1. </span>Introduction</h2> <div class="ltx_para" id="S1.p1"> <p class="ltx_p" id="S1.p1.1">As data and large language models (LLMs) continue to grow in prominence, it is crucial to identify the most effective ways to present data outputs, since the format, whether chart, table, or text, significantly influences how users engage with and interpret information <cite class="ltx_cite ltx_citemacro_citep">(Tufte and Graves-Morris, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib23" title="">1983</a>; Few, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib7" title="">2004</a>)</cite>. With datasets becoming larger and more complex, visualizations are increasingly necessary to help users digest the information effectively <cite class="ltx_cite ltx_citemacro_citep">(Godfrey et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib8" title="">2016</a>)</cite>. The expansion of LLM use in data analysis adds another layer, making it essential to understand when these models should present different output formats. Moreover, individuals have varying preferences for data representation, driven by their unique characteristics, such as experience with data analysis and visualization, age, and work experience. 
This paper investigates these preferences, exploring how user characteristics shape users’ choices of data outputs, and how LLMs can adapt to deliver more personalized and intuitive results <cite class="ltx_cite ltx_citemacro_citep">(Brown et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib2" title="">2020</a>)</cite>. Ultimately, by dynamically tailoring outputs based on user backgrounds, LLMs can offer a more customized and effective experience, helping users better understand and utilize data.</p> </div> <div class="ltx_para ltx_noindent" id="S1.p2"> <p class="ltx_p" id="S1.p2.1"><span class="ltx_text ltx_font_bold" id="S1.p2.1.1">Summary of Main Contributions.</span> The key contributions of this work are as follows:</p> </div> <div class="ltx_para" id="S1.p3"> <ul class="ltx_itemize" id="S1.I1"> <li class="ltx_item" id="S1.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S1.I1.i1.p1"> <p class="ltx_p" id="S1.I1.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S1.I1.i1.p1.1.1">A comprehensive user survey and methods design.</span> We outline the key components of the Amazon Mechanical Turk (MTurk) survey, detailing the respondent population, survey setup, specific user and data-related questions, and the instructions provided to participants.</p> </div> </li> <li class="ltx_item" id="S1.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S1.I1.i2.p1"> <p class="ltx_p" id="S1.I1.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S1.I1.i2.p1.1.1">An analysis of general data output preference results.</span> The first research question examined the general population’s preferred data output for a given question, aiming to establish a baseline for data preferences without considering user characteristics.</p> </div> </li> <li class="ltx_item" id="S1.I1.i3" style="list-style-type:none;"> <span class="ltx_tag 
ltx_tag_item">•</span> <div class="ltx_para" id="S1.I1.i3.p1"> <p class="ltx_p" id="S1.I1.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S1.I1.i3.p1.1.1">An overview of data output preferences when organized by personal user characteristics.</span> The second research question explored how a user’s personal characteristics influence their data output preferences, focusing on experience with data analysis and visualization, as well as age.</p> </div> </li> <li class="ltx_item" id="S1.I1.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S1.I1.i4.p1"> <p class="ltx_p" id="S1.I1.i4.p1.1"><span class="ltx_text ltx_font_bold" id="S1.I1.i4.p1.1.1">An overview of data output preferences when organized by work experience.</span> The third research question explored how work experience, including industry and role, influences users’ data output preferences.</p> </div> </li> <li class="ltx_item" id="S1.I1.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S1.I1.i5.p1"> <p class="ltx_p" id="S1.I1.i5.p1.1"><span class="ltx_text ltx_font_bold" id="S1.I1.i5.p1.1.1">A comparison between human and GPT preferences.</span> We used GPT to see if it could predict the human preference data we collected throughout the study.</p> </div> </li> </ul> </div> </section> <section class="ltx_section" id="S2"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">2. </span>Related Work</h2> <section class="ltx_subsection" id="S2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.1. </span>Text to Visualization Generation</h3> <div class="ltx_para" id="S2.SS1.p1"> <p class="ltx_p" id="S2.SS1.p1.1">Visualization generation, whether charts or tables, from natural language has become increasingly common as LLMs and natural language interfaces (NLIs) for data grow in popularity. 
These systems allow users with less data literacy to create comprehensive charts and expansive tables. Current research in this area often focuses on the creation of these systems <cite class="ltx_cite ltx_citemacro_citep">(Tian et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib22" title="">2024</a>; Rashid et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib17" title="">2021</a>; Narechania et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib16" title="">2020</a>)</cite>, but rarely examines which data output is best suited to a given natural language question or a given user’s individual characteristics.</p> </div> <div class="ltx_para" id="S2.SS1.p2"> <p class="ltx_p" id="S2.SS1.p2.1">In ChartGPT, <cite class="ltx_cite ltx_citemacro_citep">(Tian et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib22" title="">2024</a>)</cite> describe an LLM-based system capable of generating charts from abstract natural language inputs. ChartGPT decomposes natural language inputs into subtasks, identifying their key components in order to present an appropriate visualization. Similarly, <cite class="ltx_cite ltx_citemacro_citep">(Rashid et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib17" title="">2021</a>)</cite>’s Text2Chart uses BERT and LSTM, two deep learning models, to create visualizations from natural language. Meanwhile, <cite class="ltx_cite ltx_citemacro_citep">(Narechania et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib16" title="">2020</a>)</cite>’s NL4DV relies on traditional NLP methods, like dependency analysis and lexical parsing, instead of on an LLM. 
We aim to take this research further by simulating the natural language interactions in these systems and exploring data output mediums while also considering different user characteristics.</p> </div> </section> <section class="ltx_subsection" id="S2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.2. </span>Visualization + Text for Analysis</h3> <div class="ltx_para" id="S2.SS2.p1"> <p class="ltx_p" id="S2.SS2.p1.1">A substantial body of work has tested the varying degrees to which text and data visualizations can be used in tandem to help users digest data. Systems like Eviza <cite class="ltx_cite ltx_citemacro_citep">(Setlur et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib18" title="">2016</a>)</cite> create visualization and text combinations that make it easier for users to understand the data they are dealing with. Meanwhile, systems like <cite class="ltx_cite ltx_citemacro_citep">(Smits et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib20" title="">[n. d.]</a>; Singh et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib19" title="">2024</a>)</cite> help users create alt-text for a given data visualization. Such generated text helps users understand data visualizations that would otherwise be inaccessible to them. On the other hand, <cite class="ltx_cite ltx_citemacro_citep">(Hearst and Tory, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib9" title="">2019</a>)</cite> found that 40% of users do not prefer to see charts when in a conversational UI. 
Instead, they prefer their answers to be presented as text.</p> </div> <div class="ltx_para" id="S2.SS2.p2"> <p class="ltx_p" id="S2.SS2.p2.1">Work by <cite class="ltx_cite ltx_citemacro_citep">(Stokes et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib21" title="">2022</a>)</cite> investigated the value of including varying degrees of textual information for understanding univariate line charts. This work focused only on univariate line charts and also investigated the placement of text and its impact. That work compared showing only a line chart, a line chart with visual and short text annotations, and a longer standalone text description. It did not investigate pairing a line chart with a detailed text description, nor did it study the utility of including a table with raw data. Furthermore, its findings conflated two kinds of annotation: the intermediate charts in that study combined visual annotations (i.e., highlighting the maximum value of a time series) with short textual annotations placed near the highlighted point on the line chart.</p> </div> </section> <section class="ltx_subsection" id="S2.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.3. </span>Accessibility & Technical Literacy</h3> <div class="ltx_para" id="S2.SS3.p1"> <p class="ltx_p" id="S2.SS3.p1.1">The capability to display data in several different formats, such as in charts, tables, or text, is significant for accessibility reasons. 
As data becomes more relevant to different sectors, data illiteracy can be a limiting factor <cite class="ltx_cite ltx_citemacro_citep">(Disseldorp, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib6" title="">2020</a>; Vemulapalli, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib24" title="">2024</a>; D’Ignazio, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib5" title="">2017</a>)</cite>. Moreover, taking physical and neurological disabilities into account ensures that these systems are more accommodating <cite class="ltx_cite ltx_citemacro_citep">(Lundgard and Satyanarayan, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib12" title="">2021</a>; Lee et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib11" title="">2024</a>; Wu and Szafir, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib25" title="">2023</a>)</cite>. Overall, by researching how a person’s unique characteristics impact their data preferences, data visualization applications can become more accessible.</p> </div> <div class="ltx_para" id="S2.SS3.p2"> <p class="ltx_p" id="S2.SS3.p2.1">Taking data literacy issues a step further, many users in non-technical fields often find themselves having to interact with complex data sets and visualizations <cite class="ltx_cite ltx_citemacro_citep">(Vemulapalli, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib24" title="">2024</a>)</cite>. Furthermore, many companies have limited resources to teach their technically limited workers how to use data appropriately <cite class="ltx_cite ltx_citemacro_citep">(Disseldorp, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib6" title="">2020</a>)</cite>, and often have to rely on common end-of-year performance reviews to gauge a worker’s technical literacy. For most companies and employees, by this point, it is often too late. 
Because most companies lack the resources to bring their employees up to speed on data techniques, alternatives remain an open research need. Given this, conducting research that takes varying technical literacy and disabilities into account can help us understand how best to serve those who are often marginalized in data conversations.</p> </div> <figure class="ltx_figure" id="S2.F1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="221" id="S2.F1.g1" src="x1.png" width="639"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 1. </span> When answering the user survey, MTurkers were shown this figure as an example of what the data outputs could potentially look like. The leftmost example is what the text answer would look like, the middle is the answer in the form of a table, and the right is the answer in the form of a chart. Then they were asked, “Given a data analysis question, is it most useful to show the user text, data table, or chart?”</figcaption> </figure> </section> </section> <section class="ltx_section" id="S3"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">3. </span>Study</h2> <div class="ltx_para" id="S3.p1"> <p class="ltx_p" id="S3.p1.1">We conduct a user study focused on when it is best to show a visualization, table, text, or any combination of these options to the user for a given question. Our study consists of a user survey and a pre-survey questionnaire. The survey asks a question and then prompts the user to choose between a chart, text, or table result. 
In conducting this study, we collect user preference data and synthesize the results, highlighting interesting trends.</p> </div> <div class="ltx_para" id="S3.p2"> <p class="ltx_p" id="S3.p2.1">In this work, we study the following research questions:</p> </div> <div class="ltx_para ltx_noindent" id="S3.p3"> <p class="ltx_p" id="S3.p3.1"><span class="ltx_text ltx_font_bold" id="S3.p3.1.1">RQ1: </span>Given a data question, in general, will users prefer to see the answer visualized as a table, text, or chart?</p> </div> <div class="ltx_para" id="S3.p4"> <ul class="ltx_itemize" id="S3.I1"> <li class="ltx_item" id="S3.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I1.i1.p1"> <p class="ltx_p" id="S3.I1.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I1.i1.p1.1.1">H1</span>: Users will prefer to see their data represented as charts, tables, and then text, in that order.</p> </div> </li> </ul> </div> <div class="ltx_para ltx_noindent" id="S3.p5"> <p class="ltx_p" id="S3.p5.1"><span class="ltx_text ltx_font_bold" id="S3.p5.1.1">RQ2:</span> Are there certain personal user characteristics that correlate with the users’ preference to see a chart, table, or text?</p> </div> <div class="ltx_para" id="S3.p6"> <ul class="ltx_itemize" id="S3.I2"> <li class="ltx_item" id="S3.I2.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i1.p1"> <p class="ltx_p" id="S3.I2.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i1.p1.1.1">H2a</span>: Respondents with more data visualization experience will prefer charts, while users with less experience will show a stronger preference towards tables and text.</p> </div> </li> <li class="ltx_item" id="S3.I2.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i2.p1"> <p class="ltx_p" id="S3.I2.i2.p1.1"><span class="ltx_text ltx_font_bold" 
id="S3.I2.i2.p1.1.1">H2b</span>: Respondents with more data analysis experience will also prefer charts over text and table outputs.</p> </div> </li> <li class="ltx_item" id="S3.I2.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I2.i3.p1"> <p class="ltx_p" id="S3.I2.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I2.i3.p1.1.1">H2c</span>: In terms of age, younger respondents will show a stronger preference for charts while older respondents will prefer tables and text.</p> </div> </li> </ul> </div> <div class="ltx_para ltx_noindent" id="S3.p7"> <p class="ltx_p" id="S3.p7.1"><span class="ltx_text ltx_font_bold" id="S3.p7.1.1">RQ3:</span> Does a respondent’s role at work or industry they work in correlate with their preference to see a visualization, table, or text?</p> <ul class="ltx_itemize" id="S3.I3"> <li class="ltx_item" id="S3.I3.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I3.i1.p1"> <p class="ltx_p" id="S3.I3.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I3.i1.p1.1.1">H3a</span>: Respondents will prefer different data outputs based on the role they play at work, with more presentation-oriented roles preferring charts.</p> </div> </li> <li class="ltx_item" id="S3.I3.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I3.i2.p1"> <p class="ltx_p" id="S3.I3.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I3.i2.p1.1.1">H3b</span>: Respondents will prefer different data outputs based on the industry they work in, with more technical industries preferring tables and text.</p> </div> </li> </ul> </div> <div class="ltx_para ltx_noindent" id="S3.p8"> <p class="ltx_p" id="S3.p8.1"><span class="ltx_text ltx_font_bold" id="S3.p8.1.1">RQ4:</span> Can LLMs be used to predict whether a question should be answered with a visualization, data table, or text?</p> <ul class="ltx_itemize" 
id="S3.I4"> <li class="ltx_item" id="S3.I4.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I4.i1.p1"> <p class="ltx_p" id="S3.I4.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I4.i1.p1.1.1">H4a</span>: LLM Alignment with Humans: An LLM without any user-specific personalization will perform poorly at predicting whether a question should be answered with a visualization, data table, or text.</p> </div> </li> <li class="ltx_item" id="S3.I4.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I4.i2.p1"> <p class="ltx_p" id="S3.I4.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I4.i2.p1.1.1">H4b</span>: Personalized LLMs: Including user-specific examples and user characteristics in the LLM prompt will improve the accuracy of the LLM in generating the preferred answer for individual users.</p> </div> </li> <li class="ltx_item" id="S3.I4.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="S3.I4.i3.p1"> <p class="ltx_p" id="S3.I4.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S3.I4.i3.p1.1.1">H4c</span>: User-specific Accuracy: The personalized LLM approach will perform well for some users and worse for others.</p> </div> </li> </ul> </div> <section class="ltx_subsection" id="S3.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.1. </span>Participants</h3> <div class="ltx_para" id="S3.SS1.p1"> <p class="ltx_p" id="S3.SS1.p1.1">In order to conduct the survey, we used Amazon Mechanical Turk. In total, we uploaded 5 different sets of questions, each containing approximately 50 questions in addition to the eight demographic questions at the beginning. The data questions were created in a different survey on Upwork, where we asked participants to create general data questions that a user might ask an LLM. 
In doing so, we pulled from that bank of questions, selecting them based on their relevance to our survey. With 5 sets of 50 questions, we had 250 unique questions in total. For each set of questions, the users were compensated $1.40.</p> </div> <div class="ltx_para" id="S3.SS1.p2"> <p class="ltx_p" id="S3.SS1.p2.1">We initially recruited 200 respondents per set. At 50 questions per set, this yielded about 50,000 unique responses. From there, we removed any responses that were subpar or suspected to be duplicates. We also set a time threshold of 400 seconds and removed any respondent who fell beneath it, as our survey could not reasonably be completed in less than 400 seconds.</p> </div> </section> <section class="ltx_subsection" id="S3.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">3.2. </span>Method</h3> <div class="ltx_para" id="S3.SS2.p1"> <p class="ltx_p" id="S3.SS2.p1.1">The survey was broken down into three subsections: the demographic questions, the instructions, and the data survey questions. The answers from the demographic and data sections were used in tandem to identify trends and correlations. The same instructions were presented to each user, ensuring consistency across surveys.</p> </div> <section class="ltx_subsubsection" id="S3.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.2.1. </span>User Characteristic Questions</h4> <div class="ltx_para" id="S3.SS2.SSS1.p1"> <p class="ltx_p" id="S3.SS2.SSS1.p1.1">In order to answer RQ2 and RQ3 and gain a better understanding of whether and how a person’s characteristics impact their data output preference, we first had to gather user characteristics from each respondent. In doing so, we can map which, if any, user characteristics impact a person’s data output preference and which do not. 
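This characteristic-to-preference mapping can be sketched in a few lines; the records and field names below are illustrative assumptions, not the study’s actual schema:

```python
from collections import Counter, defaultdict

# Hypothetical cleaned survey records: one entry per (respondent, question)
# answer. Field names and values are illustrative assumptions.
responses = [
    {"user": "u1", "vis_experience": "very familiar",   "choice": "chart"},
    {"user": "u1", "vis_experience": "very familiar",   "choice": "chart"},
    {"user": "u2", "vis_experience": "very unfamiliar", "choice": "table"},
    {"user": "u2", "vis_experience": "very unfamiliar", "choice": "text"},
]

# Tally choices within each level of the characteristic of interest.
by_group = defaultdict(Counter)
for r in responses:
    by_group[r["vis_experience"]][r["choice"]] += 1

# Row-wise shares: within each group, the fraction choosing each output.
shares = {}
for group, counts in by_group.items():
    total = sum(counts.values())
    shares[group] = {choice: n / total for choice, n in counts.items()}
```

Grouping by a different key (age bracket, industry, role) reuses the same tally, which is why gathering the characteristics up front is sufficient for all of the later comparisons.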
The questions asked at the beginning of the survey can be found in section <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A2.SS3" title="B.3. Survey Questions ‣ Appendix B Study Design ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">B.3</span></a>.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.2.2. </span>Instructions</h4> <div class="ltx_para" id="S3.SS2.SSS2.p1"> <p class="ltx_p" id="S3.SS2.SSS2.p1.1">After the user characteristic questions, we showed a consistent figure that presented an example scenario (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.F1" title="Figure 1 ‣ 2.3. Accessibility & Technical Literacy ‣ 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">1</span></a>). As seen in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.F1" title="Figure 1 ‣ 2.3. Accessibility & Technical Literacy ‣ 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">1</span></a> the figure consists of a given prompt and then shows three examples of the potential data output mediums. As the user goes through the rest of the survey, they can reference this figure for examples of the different output mediums.</p> </div> <div class="ltx_para" id="S3.SS2.SSS2.p2"> <p class="ltx_p" id="S3.SS2.SSS2.p2.1">Furthermore, it was decided that the instructions would be the only place where the user could see examples of text, tables, and charts. This was intentionally done so as not to bias the respondent on each question. 
If the respondent saw a specific text, table, or chart for each question, this could have influenced their personal expectations of what the output should look like. While we understand the drawbacks of this approach, it was the best way to mitigate bias. For these reasons, we kept the instructions as a succinct section that respondents could reference for examples of each type of data output.</p> </div> </section> <section class="ltx_subsubsection" id="S3.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">3.2.3. </span>Survey Questions</h4> <div class="ltx_para" id="S3.SS2.SSS3.p1"> <p class="ltx_p" id="S3.SS2.SSS3.p1.1">As mentioned, each survey had 50 unique questions that all shared the same question structure. Each question begins with the same basic instruction: “Given the question below, please select your preference on how the answer to the question should be presented.” After this instruction, the user is presented with a generic prompt and asked to choose what data output medium would best fit the needs of that given question (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S2.F1" title="Figure 1 ‣ 2.3. Accessibility & Technical Literacy ‣ 2. Related Work ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">1</span></a>).</p> </div> <div class="ltx_para" id="S3.SS2.SSS3.p2"> <p class="ltx_p" id="S3.SS2.SSS3.p2.1">After being shown these questions, users were presented with three options: text, table, or chart. Given the question, the users were tasked with choosing which of these three data output methods they preferred. 
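Tallying these forced-choice answers into overall preference shares is then a simple count; a minimal sketch with toy responses (the real study aggregated roughly 50,000 answers):

```python
from collections import Counter

# Toy forced-choice answers; each is one of the three radio-button options.
choices = ["table", "chart", "table", "text", "chart", "table"]

counts = Counter(choices)
total = len(choices)

# Percentage of responses preferring each output medium.
shares = {option: round(100 * counts[option] / total, 2)
          for option in ("chart", "table", "text")}
```

The same tally can also be computed per question rather than globally, which is how a per-question preference distribution is derived.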
These options were presented as radio buttons, and the respondents were tasked with choosing one of the three options for each of the 50 questions.</p> </div> <figure class="ltx_figure" id="S3.F2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="315" id="S3.F2.g1" src="x2.png" width="748"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 2. </span>For each question, we aggregate all user preferences and derive a distribution, shown above (questions are sorted by the probability of chart, which produces the smooth curve). Notably, as the probability that a user prefers a chart increases, the probability that a user prefers text or a table decreases. </figcaption> </figure> </section> </section> </section> <section class="ltx_section" id="S4"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">4. </span>Results</h2> <section class="ltx_subsection" id="S4.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.1. </span>RQ1: General Preferences</h3> <div class="ltx_para ltx_noindent" id="S4.SS1.p1"> <p class="ltx_p" id="S4.SS1.p1.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p1.1.1">Findings:</span> RQ1 asks: Given a data question or prompt, will users prefer to see the answer visualized as a table, text, or chart? From the results, we found that the most common preference was for tables at 41.7%, with charts at 36.32%, and text preferred far less at 21.97% (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F10" title="Figure 10 ‣ C.2. 
Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">10</span></a>).</p> </div> <div class="ltx_para" id="S4.SS1.p2"> <p class="ltx_p" id="S4.SS1.p2.1">These results show substantial variability across the three data output preferences. Figures <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.F2" title="Figure 2 ‣ 3.2.3. Survey Questions ‣ 3.2. Method ‣ 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">2</span></a> & <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F28" title="Figure 28 ‣ C.2. Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">28</span></a> illustrate how user preferences are distributed across individual users and questions. Looking at Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.F2" title="Figure 2 ‣ 3.2.3. Survey Questions ‣ 3.2. Method ‣ 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">2</span></a>, it is clear that users exhibited preferences for all three data output types to some degree. Preferences were far from uniform, displaying significant variation that speaks to the complexity of the task. On a similar note, Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F28" title="Figure 28 ‣ C.2. 
Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">28</span></a> shows that there was a wide distribution across individual questions, following a similar pattern to Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S3.F2" title="Figure 2 ‣ 3.2.3. Survey Questions ‣ 3.2. Method ‣ 3. Study ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">2</span></a>. Overall, this variability in preferences underscores the need for a deeper analysis of ways to personalize outputs based on user characteristics.</p> </div> <div class="ltx_para ltx_noindent" id="S4.SS1.p3"> <p class="ltx_p" id="S4.SS1.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS1.p3.1.1">Analysis Details & Discussion:</span></p> </div> <div class="ltx_para" id="S4.SS1.p4"> <p class="ltx_p" id="S4.SS1.p4.1">Our original hypothesis stated: <span class="ltx_text ltx_font_italic" id="S4.SS1.p4.1.1">H1: Users will prefer to see their data represented as charts, tables, and then text in that order.</span></p> </div> <div class="ltx_para" id="S4.SS1.p5"> <p class="ltx_p" id="S4.SS1.p5.1">This hypothesis was <span class="ltx_text ltx_font_bold" id="S4.SS1.p5.1.1">partially correct</span>: text was indeed the least preferred output format, but, contrary to our hypothesis, tables and charts were the first and second most preferred outputs, respectively. 
These results were gathered by calculating the percentage of each output answer within the larger dataset.</p> </div> <div class="ltx_para" id="S4.SS1.p6"> <ol class="ltx_enumerate" id="S4.I1"> <li class="ltx_item" id="S4.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">(1)</span> <div class="ltx_para" id="S4.I1.i1.p1"> <p class="ltx_p" id="S4.I1.i1.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i1.p1.1.1">Tables</span> were the most preferred data output, with 41.7% of responses preferring tables. This preference may have been caused by the unique way that data is presented in tables. Tables are organized in a way that allows users to quickly look up and compare specific data points <cite class="ltx_cite ltx_citemacro_citep">(Few, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib7" title="">2004</a>)</cite>. Furthermore, tables are effective at handling large data densities at a single time, allowing users to navigate large data sets <cite class="ltx_cite ltx_citemacro_citep">(Tufte and Graves-Morris, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib23" title="">1983</a>)</cite>.</p> </div> </li> <li class="ltx_item" id="S4.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">(2)</span> <div class="ltx_para" id="S4.I1.i2.p1"> <p class="ltx_p" id="S4.I1.i2.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i2.p1.1.1">Charts </span> were the second preferred data output, with 36.32% of responses preferring charts. Charts are especially good at visualizing data in a way that makes it more accessible to more people. This is illustrated in <cite class="ltx_cite ltx_citemacro_citep">(Cleveland and McGill, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib4" title="">1984</a>)</cite> where the article explains how different charts are especially effective at showing trends and change over time. 
Furthermore, <cite class="ltx_cite ltx_citemacro_citep">(Heer et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib10" title="">2010</a>)</cite> explains how users can use charts and graphs to enhance comprehension while also simplifying the data.</p> </div> </li> <li class="ltx_item" id="S4.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">(3)</span> <div class="ltx_para" id="S4.I1.i3.p1"> <p class="ltx_p" id="S4.I1.i3.p1.1"><span class="ltx_text ltx_font_bold" id="S4.I1.i3.p1.1.1">Text </span> was the least preferred data output method, with 21.97% of responses indicating a preference for text outputs. Text limits the amount of data that can be shown at a single time <cite class="ltx_cite ltx_citemacro_citep">(Card, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib3" title="">1999</a>)</cite>, which can make large data sets harder to read, understand, and digest. This could potentially be why users selected text as their least preferred output method.</p> </div> </li> </ol> </div> <div class="ltx_para" id="S4.SS1.p7"> <p class="ltx_p" id="S4.SS1.p7.1">While informative on their own, these results also serve as a baseline for RQ2 and RQ3, acting as a control when we introduce user characteristics such as a user’s age and experience with data analysis. Doing so lets us compare results with no added demographic variables to results with demographic variables.</p> </div> <figure class="ltx_figure" id="S4.F3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="365" id="S4.F3.g1" src="x3.png" width="689"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 3. 
</span><span class="ltx_text ltx_font_bold" id="S4.F3.2.1">User preference by Data Visualization Experience</span>: This shows the data preferences of respondents based on their data visualization experience, specifically comparing charts, tables, and text outputs.</figcaption> </figure> </section> <section class="ltx_subsection" id="S4.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.2. </span>RQ2: User Characteristics</h3> <div class="ltx_para" id="S4.SS2.p1"> <p class="ltx_p" id="S4.SS2.p1.1">RQ2 asks: <span class="ltx_text ltx_font_italic" id="S4.SS2.p1.1.1">Are there certain personal user characteristics that correlate with the users’ preference to see a chart, table, or text?</span></p> </div> <section class="ltx_subsubsection" id="S4.SS2.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.1. </span>Findings Summary</h4> <div class="ltx_para" id="S4.SS2.SSS1.p1"> <p class="ltx_p" id="S4.SS2.SSS1.p1.1">After conducting the aforementioned study and analyzing the results, we found significant associations between personal user characteristics and preferred data outputs. In short, a user’s familiarity with data visualization, familiarity with data analysis, and age all influenced their data output preferences to some degree. In the tests for both data analysis and data visualization experience, those with more experience preferred charts. Meanwhile, those with less experience were more drawn to tables. 
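Associations like these, between an experience level and a preferred output, are typically checked with a chi-squared test of independence on the contingency table; a minimal pure-Python sketch with made-up counts (not the study’s data):

```python
# Made-up contingency table: rows are experience levels, columns are
# counts of (chart, table, text) choices. Illustrative only.
observed = [
    [432, 310, 180],   # e.g. "very familiar"
    [264, 373, 364],   # e.g. "very unfamiliar"
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-squared statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total under independence.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)
```

Here dof is 2, so the statistic is compared against the 0.05-level critical value of roughly 5.99; in practice a library routine such as scipy.stats.chi2_contingency returns the p-value directly.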
Similarly, in terms of age, younger respondents were more inclined to favor charts, while older respondents had a bias for tables.</p> </div> <div class="ltx_para ltx_noindent" id="S4.SS2.SSS1.p2"> <p class="ltx_p" id="S4.SS2.SSS1.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS1.p2.1.1">Analysis Details:</span> After collecting the responses, we grouped respondents by their personal characteristics, specifically their familiarity with data analysis and visualization and their age. We organized the findings into heatmaps that compared the user characteristic (y-axis) with the different data outputs. Unless otherwise mentioned, the heatmaps are normalized row-wise. A p-value below 0.05 indicates a statistically significant association, allowing the null hypothesis to be rejected. The p-values revealed highly significant associations between user characteristics and user preferences.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.2. </span>H2a: Influence of Data Visualization Experience on Preference</h4> <div class="ltx_para" id="S4.SS2.SSS2.p1"> <p class="ltx_p" id="S4.SS2.SSS2.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS2.p1.1.1">H2a:<span class="ltx_text ltx_font_medium" id="S4.SS2.SSS2.p1.1.1.1"> Respondents with more data visualization experience will prefer charts, while users with less experience will show a stronger preference towards tables and text.</span></span></p> </div> <div class="ltx_para" id="S4.SS2.SSS2.p2"> <p class="ltx_p" id="S4.SS2.SSS2.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS2.p2.1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F3" title="Figure 3 ‣ 4.1. RQ1: General Preferences ‣ 4. 
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">3</span></a></span> shows the relationship between a user’s familiarity with data visualizations and their data output preferences. As seen in the figure, participants who referred to themselves as “very familiar” with data visualizations had the strongest preference for charts (43.2%). As experience with data visualization decreased, so did the preference for charts, with only 26.4% of very unfamiliar respondents preferring charts. Similarly, preference for text also grew stronger as familiarity with data visualization waned, with respondents who were “very unfamiliar” preferring text at 36.4% and tables at 37.3%.</p> </div> <div class="ltx_para" id="S4.SS2.SSS2.p3"> <p class="ltx_p" id="S4.SS2.SSS2.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS2.p3.1.1">Analysis:</span> The hypothesis that respondents with more data visualization experience will prefer charts over tables and text <span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS2.p3.1.2">was supported</span> (p &lt; 0.01). The survey results and the statistical tests both support H2a, signifying that users who are more familiar with data visualizations have a stronger preference towards charts. Charts are often only useful to those with a certain level of data visualization literacy, which could explain this trend. Conversely, those with less familiarity showed a stronger preference for tables and text, perhaps because these outputs are easier to understand with less data visualization experience. Looking at Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F3" title="Figure 3 ‣ 4.1. RQ1: General Preferences ‣ 4.
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">3</span></a> from a high level, there is almost a diagonal that forms from top left to bottom right. This diagonal illustrates that as familiarity wanes, so does the preference for charts. Conversely, the preference for tables gets stronger as familiarity wanes, with the “very unfamiliar” row being the main outlier.</p> </div> <figure class="ltx_figure" id="S4.F4"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="434" id="S4.F4.g1" src="x4.png" width="713"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 4. </span><span class="ltx_text ltx_font_bold" id="S4.F4.2.1">User preference by Data Analysis Experience:</span> This shows the data preferences of respondents based on their data analysis experience, specifically comparing charts, tables, and text outputs. </figcaption> </figure> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.3. </span>H2b: Influence of Data Analysis Experience on Preference</h4> <div class="ltx_para" id="S4.SS2.SSS3.p1"> <p class="ltx_p" id="S4.SS2.SSS3.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS3.p1.1.1">H2b:<span class="ltx_text ltx_font_medium" id="S4.SS2.SSS3.p1.1.1.1"> Respondents with more data analysis experience will also prefer charts over text and table outputs.</span></span></p> </div> <div class="ltx_para" id="S4.SS2.SSS3.p2"> <p class="ltx_p" id="S4.SS2.SSS3.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS3.p2.1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F4" title="Figure 4 ‣ 4.2.2. H2a: Influence of Data Visualization Experience on Preference ‣ 4.2. RQ2: User Characteristics ‣ 4.
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">4</span></a></span> shows a contingency table that compares a user’s familiarity with data analysis with their preferred data output. In creating this table, we found that data analysis experience is strongly associated with data output preference. More specifically, users who identified themselves as being very familiar with data analysis preferred charts at 41.4%. Meanwhile, those with lower familiarity with data analysis had a stronger bias towards tables, with users who identified themselves as unfamiliar and very unfamiliar preferring tables at a rate of 46.0% and 38.3% respectively. Finally, text output was always the least preferred, though its share grew marginally as familiarity waned.</p> </div> <div class="ltx_para" id="S4.SS2.SSS3.p3"> <p class="ltx_p" id="S4.SS2.SSS3.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS3.p3.1.1">Analysis:</span> The hypothesis that respondents with more data analysis experience will prefer charts over tables and text <span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS3.p3.1.2">was supported</span> (p &lt; 0.01). The results from our comparison of a user’s data output preference with their data analysis experience are not too dissimilar to the results from the data visualization experience part of the study. For example, users with the most experience with data analysis showed a strong preference for charts, with the preference dropping as familiarity decreased. The difference, however, is that users with the least familiarity with data analysis break the trend in both the chart and table rows. Once again, the preference for charts among the most familiar could be because of the extra level of data literacy that is required to understand charts. Overall, this table shows that there is an association between the two variables.
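The association tests reported here can be sketched as a chi-square test of independence on a contingency table, followed by the row-wise normalization used for the heatmaps. The counts below are illustrative placeholders, not the study’s data, and the use of `scipy.stats.chi2_contingency` is an assumption about tooling rather than the authors’ stated implementation.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table (rows: familiarity levels; columns:
# chart, table, text). These counts are placeholders, NOT the study's data.
counts = np.array([
    [41, 38, 21],
    [35, 42, 23],
    [26, 46, 28],
])

# Chi-square test of independence: a small p-value indicates a
# statistically significant association between the two variables.
chi2, p, dof, expected = chi2_contingency(counts)

# Row-wise normalization, as in the paper's heatmaps: each row becomes
# the preference distribution for one familiarity level.
row_pct = counts / counts.sum(axis=1, keepdims=True)
```

Rejecting the null hypothesis when `p` falls below 0.05 (or 0.01, as in the results above) matches the decision rule described in the analysis details.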
When designing LLMs, designers and developers can use this information to create a better user experience for their users.</p> </div> <figure class="ltx_figure" id="S4.F5"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="529" id="S4.F5.g1" src="x5.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 5. </span>Comparison of the users’ age with their preference for seeing the answer as a chart, table, or text. </figcaption> </figure> </section> <section class="ltx_subsubsection" id="S4.SS2.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.2.4. </span>H2c: Influence of Age on Preference</h4> <div class="ltx_para" id="S4.SS2.SSS4.p1"> <p class="ltx_p" id="S4.SS2.SSS4.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS4.p1.1.1">H2c:<span class="ltx_text ltx_font_medium" id="S4.SS2.SSS4.p1.1.1.1"> In terms of age, younger respondents will show a stronger preference towards charts, while older respondents will prefer tables and text.</span></span></p> </div> <div class="ltx_para" id="S4.SS2.SSS4.p2"> <p class="ltx_p" id="S4.SS2.SSS4.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS4.p2.1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F5" title="Figure 5 ‣ 4.2.3. H2b: Influence of Data Analysis Experience on Preference ‣ 4.2. RQ2: User Characteristics ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">5</span></a></span> shows the relationship between a user’s age range and their preferred data output. Here, we found that different age groups prefer different data output methods. For one, younger users aged 18-24 showed the strongest bias toward charts at 43.1%.
On the other hand, the preference for tables increased with age, with users 45 and older favoring tables. Finally, text was the least preferred data output across all ages, with 18-24 year olds preferring it the least at 15.9%.</p> </div> <div class="ltx_para" id="S4.SS2.SSS4.p3"> <p class="ltx_p" id="S4.SS2.SSS4.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS2.SSS4.p3.1.1">Analysis:</span> The hypothesis that younger respondents will prefer charts over tables and text while older respondents would prefer the opposite <span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS2.SSS4.p3.1.2">was supported</span> (p &lt; 0.01). The data in the contingency table in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F5" title="Figure 5 ‣ 4.2.3. H2b: Influence of Data Analysis Experience on Preference ‣ 4.2. RQ2: User Characteristics ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">5</span></a> and the p-value show that age is strongly associated with a user’s data output method of choice. As mentioned, younger users seem to prefer charts the most, and, interestingly, this preference for charts steadily drops with age, with the biggest drop coming between the 18-24 group and the 25-34 group. According to <cite class="ltx_cite ltx_citemacro_citep">(Mládková, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib15" title="">2017</a>; Yaru and Harun, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib26" title="">2024</a>; Mellman, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib13" title="">2020</a>)</cite>, younger generations prefer receiving information in easy-to-understand snippets rather than large blocks of text.
Given this, it makes sense that this age group has the strongest preference for charts, with chart preference declining steadily across each subsequent age group. Conversely, table preferences typically increase as participants get older. All in all, this data can be used when designing user experiences as it shows that a user’s primary data output preference may change with age.</p> </div> </section> </section> <section class="ltx_subsection" id="S4.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.3. </span>RQ3: Work Experience</h3> <div class="ltx_para" id="S4.SS3.p1"> <p class="ltx_p" id="S4.SS3.p1.1">RQ3 asks: <span class="ltx_text ltx_font_italic" id="S4.SS3.p1.1.1">Does a respondent’s role at work or the industry they work in correlate with their preference to see a visualization, table, or text?</span></p> </div> <section class="ltx_subsubsection" id="S4.SS3.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.3.1. </span>Findings Summary</h4> <div class="ltx_para" id="S4.SS3.SSS1.p1"> <p class="ltx_p" id="S4.SS3.SSS1.p1.1">After conducting a study comparing the influence of a user’s work experience on their data output preferences, we concluded that there are highly significant associations between the two. Specifically in terms of roles, we found that those in decision-maker roles strongly preferred tables, while analysts had the strongest preference for charts among the group (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F6" title="Figure 6 ‣ 4.3.1. Findings Summary ‣ 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">6</span></a>). Meanwhile, in terms of a user’s industry, there was considerable variation, but industries like Development and IT, and Sales and Marketing preferred charts more than other industries (Fig. 
<a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F7" title="Figure 7 ‣ 4.3.2. H3a: Influence of Role on Preference ‣ 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">7</span></a>).</p> </div> <div class="ltx_para" id="S4.SS3.SSS1.p2"> <p class="ltx_p" id="S4.SS3.SSS1.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.SSS1.p2.1.1">Analysis Details:</span> After collecting the responses, we grouped respondents by their work experience, specifically the role they played at work and the industry they worked in. We then used p-values to assess the statistical significance of the associations. The p-values revealed highly significant associations between the users’ work experiences and their data output preferences.</p> </div> <figure class="ltx_figure" id="S4.F6"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="623" id="S4.F6.g1" src="x6.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 6. </span><span class="ltx_text ltx_font_bold" id="S4.F6.2.1">User preference by Role</span>: This shows the data preferences of respondents based on their work role, specifically comparing charts, tables, and text outputs. </figcaption> </figure> </section> <section class="ltx_subsubsection" id="S4.SS3.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.3.2. 
</span>H3a: Influence of Role on Preference</h4> <div class="ltx_para" id="S4.SS3.SSS2.p1"> <p class="ltx_p" id="S4.SS3.SSS2.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS3.SSS2.p1.1.1">H3a:<span class="ltx_text ltx_font_medium" id="S4.SS3.SSS2.p1.1.1.1"> Respondents will prefer different data outputs based on the role they play at work, with more presentation-oriented roles preferring charts.</span></span></p> </div> <div class="ltx_para" id="S4.SS3.SSS2.p2"> <p class="ltx_p" id="S4.SS3.SSS2.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.SSS2.p2.1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F6" title="Figure 6 ‣ 4.3.1. Findings Summary ‣ 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">6</span></a></span> shows the relationship between a user’s role at work and their data output preferences. In general, the results show that the data preferences are significantly different depending on a user’s role. For example, for those who identified as analysts, charts were the most preferred data output method (38.7%). On the other hand, decision makers were the group that preferred charts the least at 28.5%, but preferred tables 51.9% of the time. Finally, support specialists had the highest bias for text at 27.5% of responses.</p> </div> <div class="ltx_para" id="S4.SS3.SSS2.p3"> <p class="ltx_p" id="S4.SS3.SSS2.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.SSS2.p3.1.1">Analysis:</span> The hypothesis that respondents in more presentation-oriented roles would prefer charts over tables and text <span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS3.SSS2.p3.1.2">was not supported</span> (p &lt; 0.01). For the most part, each role had a stronger preference for tables than for charts, even if for some roles the margin was small. 
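The role-specific breakdowns above could inform role-conditioned default formats. As a sketch, the lookup below maps each role to a preference distribution and picks the most-preferred format; the analyst chart share (38.7%), decision-maker table share (51.9%), and support-specialist text share (27.5%) echo the reported results, while the remaining values are invented purely to complete each distribution.

```python
# Illustrative role -> preference distributions. Only the highlighted
# shares come from the reported results; the rest are made-up fillers.
ROLE_PREFS = {
    "analyst":            {"chart": 0.387, "table": 0.380, "text": 0.233},
    "decision maker":     {"chart": 0.285, "table": 0.519, "text": 0.196},
    "support specialist": {"chart": 0.330, "table": 0.395, "text": 0.275},
}

def default_format(role: str, fallback: str = "table") -> str:
    """Return the most-preferred output format for a role, or a fallback."""
    prefs = ROLE_PREFS.get(role.lower())
    if prefs is None:
        return fallback  # unknown role: fall back to the overall favorite
    return max(prefs, key=prefs.get)
```

A real system would treat these distributions as priors to be updated from a user’s own feedback rather than as fixed rules.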
Considering that each role has a varied preference percentage breakdown and that the p-value is less than 0.01, there is a strong indication of a significant association between data preference and role. From these results, it can be concluded that LLMs could use a user’s work role to influence what data output they use. For example, if a user is marked as a decision maker, it may make sense to show them a table given that these respondents preferred tables 51.9% of the time. Furthermore, an LLM might also want to give more weight to charts for analysts as they preferred charts 38.7% of the time. Given all of this information, LLMs have the opportunity to be more personalized by incorporating findings like these to present data based on a user’s persona.</p> </div> <figure class="ltx_figure" id="S4.F7"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="623" id="S4.F7.g1" src="x7.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 7. </span><span class="ltx_text ltx_font_bold" id="S4.F7.2.1">User preference by Industry</span>: This shows the data preferences of respondents based on their work industry, specifically comparing charts, tables, and text outputs. </figcaption> </figure> </section> <section class="ltx_subsubsection" id="S4.SS3.SSS3"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.3.3. 
</span>H3b: Influence of Industry on Preference</h4> <div class="ltx_para" id="S4.SS3.SSS3.p1"> <p class="ltx_p" id="S4.SS3.SSS3.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS3.SSS3.p1.1.1">H3b:<span class="ltx_text ltx_font_medium" id="S4.SS3.SSS3.p1.1.1.1"> Respondents will prefer different data outputs based on the industry they work in, with more technical industries preferring tables and text.</span></span></p> </div> <div class="ltx_para" id="S4.SS3.SSS3.p2"> <p class="ltx_p" id="S4.SS3.SSS3.p2.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.SSS3.p2.1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F7" title="Figure 7 ‣ 4.3.2. H3a: Influence of Role on Preference ‣ 4.3. RQ3: Work Experience ‣ 4. Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">7</span></a></span> displays the relationship between the industry a user works in and whether they prefer data outputted as a table, chart, or text. From this chart, it is clear that respondents in the Development and IT industry had the highest preference for charts at 39.2%. Meanwhile, industries like Finance and Accounting had a stronger preference for tables at 43.5%. Finally, text was most strongly preferred by unemployed respondents at 30.1%, suggesting that they prefer a narrative with their data.</p> </div> <div class="ltx_para" id="S4.SS3.SSS3.p3"> <p class="ltx_p" id="S4.SS3.SSS3.p3.1"><span class="ltx_text ltx_font_bold" id="S4.SS3.SSS3.p3.1.1">Analysis:</span> The findings <span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS3.SSS3.p3.1.2">support</span> (p &lt; 0.01) our hypothesis as respondents preferred different data outputs based on the industries they worked in. Moreover, technical fields like Development and IT showed a stronger preference for charts. 
This could potentially be because charts are more efficient at conveying trends <cite class="ltx_cite ltx_citemacro_citep">(Tufte and Graves-Morris, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib23" title="">1983</a>)</cite>. Similarly, those in the Finance and Accounting industries preferred tables, suggesting that they may have wanted to look at many data points and potentially compare them <cite class="ltx_cite ltx_citemacro_citep">(Few, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib7" title="">2004</a>)</cite>. Developers of LLMs can use this information to make their systems more responsive to users in a wide array of industries.</p> </div> <figure class="ltx_figure" id="S4.F8"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="539" id="S4.F8.g1" src="x8.png" width="705"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 8. </span>When users are highly experienced in both visualization and data analysis, they tend to prefer visualizations for answering questions. However, users who are unfamiliar with visualizations but experienced in data analysis lean towards text-based answers, while novices in both fields show a slight preference for visualizations over tables and text. </figcaption> </figure> </section> <section class="ltx_subsubsection" id="S4.SS3.SSS4"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.3.4. </span>Preferences When Combining User Characteristics</h4> <div class="ltx_para" id="S4.SS3.SSS4.p1"> <p class="ltx_p" id="S4.SS3.SSS4.p1.1">Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F8" title="Figure 8 ‣ 4.3.3. H3b: Influence of Industry on Preference ‣ 4.3. RQ3: Work Experience ‣ 4. 
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">8</span></a> highlights how the combination of a user’s experience with both data analysis and visualizations influences their data output preferences. This further segmentation of the data yields more granular insights into how different user characteristics influence a user’s preferences.</p> </div> <div class="ltx_para" id="S4.SS3.SSS4.p2"> <p class="ltx_p" id="S4.SS3.SSS4.p2.1">For example, if a respondent marked that they were highly familiar with both data analysis and visualizations (data visualization experience = 5; data analysis experience = 5), then they were more likely to prefer charts (46%) over tables (35%) or text (18%). However, when a respondent indicated that they were experienced in only data analysis (data visualization experience = 1; data analysis experience = 5), we found that their preference shifted heavily towards text (78%). The shift away from visualizations makes sense, as these users most likely rely on text to compensate for their lack of visualization experience. Similarly, novices in both data analysis and visualizations (data visualization experience = 1; data analysis experience = 1) show a marginal preference for charts (37%). However, with an increase in data analysis experience, users begin to strongly prefer tables at 45%.</p> </div> <div class="ltx_para" id="S4.SS3.SSS4.p3"> <p class="ltx_p" id="S4.SS3.SSS4.p3.1">The takeaways about user preferences become more substantial when comparing two dissimilar user characteristics, such as a user’s role and visualization experience (Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F23" title="Figure 23 ‣ C.2. Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">23</span></a>). 
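This kind of two-way segmentation can be computed as a cross-tabulation over two characteristics at once. The rows below are a tiny, hypothetical sample, not the survey data; the sketch assumes `pandas` is available.

```python
import pandas as pd

# Tiny illustrative sample of responses (NOT the survey data): each row
# is one respondent with two experience ratings and a preferred output.
df = pd.DataFrame({
    "vis_exp": [5, 5, 5, 1, 1, 1, 1, 5],
    "da_exp":  [5, 5, 5, 5, 5, 1, 1, 5],
    "pref":    ["chart", "chart", "table", "text", "text",
                "chart", "table", "chart"],
})

# Cross-tabulate each (vis_exp, da_exp) combination against preference,
# normalizing within each combination to obtain preference shares.
shares = pd.crosstab([df["vis_exp"], df["da_exp"]], df["pref"],
                     normalize="index")
```

Each row of `shares` is then one cell of the combined-characteristics figure: the preference distribution for users with that exact pair of experience levels.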
While users may share one characteristic, a difference in the other often causes a sizeable swing in their data preferences. For example, an analyst with high visualization experience prefers charts (43%), while analysts with less experience prefer tables (47%). All in all, the fluctuations across similar roles and similar visualization experiences suggest that even a small change in user characteristics can influence their preferences.</p> </div> <div class="ltx_para" id="S4.SS3.SSS4.p4"> <p class="ltx_p" id="S4.SS3.SSS4.p4.1">These data points underscore that not all users with the same characteristics have the same preferences. Preferences do not exist in a vacuum and often depend on other user characteristics. For this reason, LLMs and other data tools need to be able to dynamically adjust to a combination of user characteristics to best meet the needs of the user.</p> </div> </section> </section> <section class="ltx_subsection" id="S4.SS4"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">4.4. </span>RQ4: LLM Preference Predictions</h3> <div class="ltx_para" id="S4.SS4.p1"> <p class="ltx_p" id="S4.SS4.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS4.p1.1.1">RQ4:<span class="ltx_text ltx_font_medium" id="S4.SS4.p1.1.1.1"> Can LLMs be used to predict whether a question should be answered with a visualization, data table, or text?</span></span></p> </div> <div class="ltx_para" id="S4.SS4.p2"> <p class="ltx_p" id="S4.SS4.p2.1">In other words, is there alignment on this task between what humans actually prefer and the inferences generated by the LLM? 
This question is of fundamental importance: if it holds, then LLMs can be used to infer how a question should be answered for a specific user.</p> </div> <section class="ltx_subsubsection" id="S4.SS4.SSS1"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.1. </span>H4a: LLM Alignment with Humans</h4> <div class="ltx_para" id="S4.SS4.SSS1.p1"> <p class="ltx_p" id="S4.SS4.SSS1.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS4.SSS1.p1.1.1">H4a:<span class="ltx_text ltx_font_medium" id="S4.SS4.SSS1.p1.1.1.1"> An LLM without user-specific personalization will perform poorly at predicting whether a user’s question should be answered with a chart, table, or text.</span></span></p> </div> <div class="ltx_para" id="S4.SS4.SSS1.p2"> <p class="ltx_p" id="S4.SS4.SSS1.p2.1">To answer this question, we used the approach shown in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F27" title="Figure 27 ‣ C.2. Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">27</span></a>. For the LLM, we used GPT4o (gpt-4o-2024-05-13b). Using this non-personalized approach to predict the answer preferred by a user for a given question, the average accuracy is 0.367. Notably, the average accuracy of the non-personalized LLM approach is very close to what would be expected by random selection. This finding implies that different users often have different preferences for how they want the answer to be presented for a given data analysis question. 
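The comparison to random selection can be sanity-checked with a quick simulation: guessing uniformly among three formats yields an expected accuracy of 1/3 regardless of how the true preferences are skewed. The simulation below is illustrative only; the skew weights loosely echo the overall preference split reported elsewhere in the paper.

```python
import random

random.seed(0)  # for reproducibility
FORMATS = ["chart", "table", "text"]

# Draw ground-truth preferences with an arbitrary skew (roughly 41%
# table, 36% chart, 22% text); the guesser ignores this skew entirely
# and picks uniformly at random.
truth = random.choices(FORMATS, weights=[36, 41, 22], k=100_000)
guesses = [random.choice(FORMATS) for _ in truth]
accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
# accuracy lands near 1/3, which is what makes the non-personalized
# baseline of 0.367 look close to chance
```

Only user-specific information can push accuracy meaningfully above this floor, which motivates the personalization experiments that follow.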
Hence, this result is interesting and important from a personalization perspective, and leads us to the next few research questions, which test whether including user-specific information, such as their characteristics and preferences about other questions, can lead to better predictive performance.</p> </div> </section> <section class="ltx_subsubsection" id="S4.SS4.SSS2"> <h4 class="ltx_title ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.2. </span>H4b: Personalized LLMs</h4> <div class="ltx_para" id="S4.SS4.SSS2.p1"> <p class="ltx_p" id="S4.SS4.SSS2.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS4.SSS2.p1.1.1">H4b:<span class="ltx_text ltx_font_medium" id="S4.SS4.SSS2.p1.1.1.1"> Does including user-specific examples and user characteristics in the LLM improve the accuracy of the LLM in generating the preferred answer for individual users?</span></span></p> </div> <div class="ltx_para" id="S4.SS4.SSS2.p2"> <p class="ltx_p" id="S4.SS4.SSS2.p2.1">To answer this question, we personalized the LLM by including the user characteristics and previous preferences of a specific user, that is, we included questions along with how the user prefers to view the answers to them (text, data table, visualization). We provide an overview of the approach in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.F26" title="Figure 26 ‣ C.2. Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">26</span></a>. In Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#S4.F9" title="Figure 9 ‣ 4.4.2. H4b: Personalized LLMs ‣ 4.4. RQ4: LLM Preference Predictions ‣ 4. 
Results ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">9</span></a>, we observe that personalization accuracy for the specific user increases as a function of the number of user-specific examples we use for inference. To understand the effectiveness of our approach, we also investigated the accuracy when we removed different components of our approach. Notably, when we remove the few-shot examples, our approach achieves an accuracy of only 0.377, whereas when both the few-shot examples and user characteristics are removed, the performance decreases further to 0.367. We note that this last ablation is the case where no personalization is used, since we do not include any user-specific examples (few-shot) and we do not provide any user characteristics to the model. In comparison, we achieve an accuracy of 0.469 and 0.487 when 20 and 40 shots are used by the model, respectively.</p> </div> <div class="ltx_para" id="S4.SS4.SSS2.p3"> <p class="ltx_p" id="S4.SS4.SSS2.p3.1">As an aside, we also investigated GPT3.5 using 40-shot user-specific examples with visualization experience, data analysis experience, and the users’ role. For this model, we achieved an accuracy of 0.441 compared to 0.487 using the GPT4o model.</p> </div> <figure class="ltx_figure" id="S4.F9"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="443" id="S4.F9.g1" src="x9.png" width="664"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 9. </span>Our approach shows that accuracy improves as more user-specific examples are used for personalization, indicating that including additional examples of user preferences for different data analysis questions allows for more precise output. This result was based on user-specific characteristics like visualization and data analysis experience and their role. 
See text for further discussion. </figcaption> </figure> <figure class="ltx_figure" id="S4.SS4.SSS2.fig1"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.3"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.3.1" style="font-size:90%;">Given the data analytics question below, along with the list of user characteristics (preferences indicated by the user), and list of questions and responses for the user, please select how the answer to the question should be presented (e.g., Table, Text, Chart) for the specific user with the user characteristics and the user’s preferences for other questions.</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.4"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.4.1" style="font-size:90%;">The possible options are:</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.5"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.5.1" style="font-size:90%;">* Table</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.6"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.6.1" style="font-size:90%;">* Text</span></p> </div> <div class="ltx_flex_break"></div> 
<div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.7"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.7.1" style="font-size:90%;">* Chart</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.8"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.8.1" style="font-size:90%;">Here are the user characteristics:</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_align_center ltx_figure_panel" id="S4.SS4.SSS2.fig1.9"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.9.1" style="font-size:90%;">[User Characteristics]</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="S4.SS4.SSS2.fig1.10"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.10.1" style="font-size:90%;">Here is a list of questions and preferences for how the user wanted the answer to be presented:</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_align_center ltx_figure_panel" id="S4.SS4.SSS2.fig1.11"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.11.1" style="font-size:90%;">[User-specific Few-shot Examples]</span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_align_center ltx_figure_panel" id="S4.SS4.SSS2.fig1.12"><span class="ltx_text ltx_font_typewriter" id="S4.SS4.SSS2.fig1.12.1" style="font-size:90%;">[Question]</span></p> </div> </div> </figure> </section> <section class="ltx_subsubsection" id="S4.SS4.SSS3"> <h4 class="ltx_title 
ltx_title_subsubsection"> <span class="ltx_tag ltx_tag_subsubsection">4.4.3. </span>H4c: User-specific Accuracy</h4> <div class="ltx_para" id="S4.SS4.SSS3.p1"> <p class="ltx_p" id="S4.SS4.SSS3.p1.1"><span class="ltx_text ltx_font_bold ltx_font_italic" id="S4.SS4.SSS3.p1.1.1">H4c:<span class="ltx_text ltx_font_medium" id="S4.SS4.SSS3.p1.1.1.1"> Does the personalized LLM approach perform well for some users and worse for others? In other words, are some users easier to personalize for than others?</span></span></p> </div> <div class="ltx_para" id="S4.SS4.SSS3.p2"> <p class="ltx_p" id="S4.SS4.SSS3.p2.1">We also investigated per-user accuracy, shown in Table <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#A3.T1" title="Table 1 ‣ C.2. Influence of Work Experience ‣ Appendix C Further Discussion ‣ Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text"><span class="ltx_text ltx_ref_tag">1</span></a>. For brevity, we report the user-specific accuracies of a small subset of ten users across the different models investigated.</p> </div> <div class="ltx_para" id="S4.SS4.SSS3.p3"> <p class="ltx_p" id="S4.SS4.SSS3.p3.1">This subset illustrates that, for some users, predicting how they want an answer presented is easy, while for others it is considerably more difficult.</p> </div> </section> </section> </section> <section class="ltx_section" id="S5"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">5. </span>Discussion</h2> <div class="ltx_para" id="S5.p1"> <p class="ltx_p" id="S5.p1.1">This study focuses on how user characteristics influence a user’s data output preferences, specifically through a user study measuring preferences among chart, table, and text outputs. 
We then synthesized and presented the results; in this section, we discuss them, examine common themes, and highlight practical takeaways that can be applied to existing data tools.</p> </div> <section class="ltx_subsection" id="S5.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.1. </span>General Preferences</h3> <div class="ltx_para" id="S5.SS1.p1"> <p class="ltx_p" id="S5.SS1.p1.1">Research question 1 (RQ1) looked at the issue from a bird’s-eye view and established the framework for the later RQs, but it was also significant in its own right, providing insight into general preferences. We hypothesized that charts would be the most preferred, but users actually preferred tables most often, 41% of the time. Charts were preferred at a somewhat similar rate of 36.2%, and text was the least preferred at 21.9%.</p> </div> <div class="ltx_para" id="S5.SS1.p2"> <p class="ltx_p" id="S5.SS1.p2.1">From this data, we can gather that tables are still preferred because of their ability to quickly display large data sets and allow the user to find and compare specific pieces of data <cite class="ltx_cite ltx_citemacro_citep">(Few, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib7" title="">2004</a>)</cite>. Considering that charts are not far behind, their utility should not be underestimated, as they are useful for identifying trends <cite class="ltx_cite ltx_citemacro_citep">(Tufte and Graves-Morris, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib23" title="">1983</a>)</cite>. Finally, a substantial number of users still preferred text, which may stem from its straightforward nature or its ability to tell a narrative.</p> </div> </section> <section class="ltx_subsection" id="S5.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.2. 
</span>Influence of User Characteristics and Work Experience</h3> <div class="ltx_para" id="S5.SS2.p1"> <p class="ltx_p" id="S5.SS2.p1.1">The results show that user characteristics such as data visualization experience, data analysis experience, and age significantly influence data output preferences. Users with more experience in data visualization and analysis demonstrated a strong preference for charts, likely because they are more accustomed to interpreting trends and complex data <cite class="ltx_cite ltx_citemacro_citep">(Tufte and Graves-Morris, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib23" title="">1983</a>)</cite>. Meanwhile, those with less experience tended to prefer tables, possibly because tables make it quick to compare and contrast large data sets. Age also played a role: younger users favored charts, likely due to their familiarity with visually rich platforms, while older users were more inclined to prefer tables and text. This indicates that user characteristics indeed shape the way users prefer to receive data outputs.</p> </div> <div class="ltx_para" id="S5.SS2.p2"> <p class="ltx_p" id="S5.SS2.p2.1">Work experience also greatly affected data output preferences, with users’ roles and industries shaping what they preferred. Analysis-oriented roles leaned towards charts, which offer high-level insights, while decision-makers showed a stronger preference for tables, valuing the quick and accurate information they provide. Likewise, industry differences were notable: development and IT professionals preferred both charts and tables, likely for presenting trends and precise data, while those in finance and accounting favored tables due to their need for large volumes of exact, easily comparable data. 
These findings emphasize the opportunity to personalize data outputs based on work roles and industries, tailoring data presentations to better suit the needs of users with different professional backgrounds.</p> </div> </section> <section class="ltx_subsection" id="S5.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">5.3. </span>Human Preference vs. GPT Preference</h3> <div class="ltx_para" id="S5.SS3.p1"> <p class="ltx_p" id="S5.SS3.p1.1">The results from investigating RQ4 reveal that providing user-specific information helps large language models predict how users will prefer to see their data. By feeding the LLMs user data such as role, data visualization experience, and age, we can increase the effectiveness of the models’ predictions. Specifically, the LLM performed much better as we increased the amount of user data provided during few-shot learning, with accuracy rising from 0.367 with no user data to 0.487 with around forty examples. Providing the LLM with few-shot examples significantly increased its accuracy. Overall, we can conclude that feeding an LLM personal data meaningfully alters its outputs.</p> </div> <div class="ltx_para" id="S5.SS3.p2"> <p class="ltx_p" id="S5.SS3.p2.1">Furthermore, comparing GPT-4o with GPT-3.5 revealed that using the more advanced model for predicting personalization improves accuracy. Although providing user data improves accuracy, the variance in user-specific accuracy still shows that some users are harder to predict than others. Consequently, LLMs still require further tuning before they can be considered fully accurate or reliable in this space. 
In summary, feeding GPT personalized information improves its ability to predict user preferences, but continued work is needed to optimize models for higher accuracy.</p> </div> </section> </section> <section class="ltx_section" id="S6"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">6. </span>Conclusion</h2> <div class="ltx_para" id="S6.p1"> <p class="ltx_p" id="S6.p1.1">In this paper, we conducted a research survey that investigated how user characteristics and work experience shape data output preferences. We found that each of the characteristics we studied influenced a user’s data output preference in some way. For this reason, we recommend that data tools tailor their outputs to the personal characteristics of each user. Doing so will create a better user experience and is likely to increase efficiency. Additionally, we used this data to explore how effective LLMs are at predicting these user preferences. Our findings indicate that LLMs given no personalization information perform poorly, whereas providing user-specific information significantly improves their accuracy. These findings underscore the significance of understanding a user’s characteristics when creating data tools and attempting to replicate preferences with LLMs.</p> </div> </section> <section class="ltx_bibliography" id="bib"> <h2 class="ltx_title ltx_title_bibliography">References</h2> <ul class="ltx_biblist"> <li class="ltx_bibitem" id="bib.bib2"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Brown et al<span class="ltx_text" id="bib.bib2.2.2.1">.</span> (2020)</span> <span class="ltx_bibblock"> Tom B. 
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. </span> <span class="ltx_bibblock">Language Models are Few-Shot Learners. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib2.3.1">CoRR</em> abs/2005.14165 (2020). </span> <span class="ltx_bibblock">arXiv:2005.14165 <a class="ltx_ref ltx_url ltx_font_typewriter" href="https://arxiv.org/abs/2005.14165" title="">https://arxiv.org/abs/2005.14165</a> </span> </li> <li class="ltx_bibitem" id="bib.bib3"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Card (1999)</span> <span class="ltx_bibblock"> Stuart K Card. 1999. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib3.1.1">Readings in information visualization: using vision to think</em>. </span> <span class="ltx_bibblock">Morgan Kaufmann. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib4"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Cleveland and McGill (1984)</span> <span class="ltx_bibblock"> William S Cleveland and Robert McGill. 1984. </span> <span class="ltx_bibblock">Graphical perception: Theory, experimentation, and application to the development of graphical methods. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib4.1.1">Journal of the American statistical association</em> 79, 387 (1984), 531–554. 
</span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib5"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">D’Ignazio (2017)</span> <span class="ltx_bibblock"> Catherine D’Ignazio. 2017. </span> <span class="ltx_bibblock">Creative data literacy: Bridging the gap between the data-haves and data-have nots. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib5.1.1">Information Design Journal</em> 23, 1 (2017), 6–18. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib6"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Disseldorp (2020)</span> <span class="ltx_bibblock"> EM Disseldorp. 2020. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib6.1.1">Data Literacy: Detecting Data Literacy Gaps within Businesses</em>. </span> <span class="ltx_bibblock">B.S. thesis. University of Twente. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib7"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Few (2004)</span> <span class="ltx_bibblock"> Stephen Few. 2004. </span> <span class="ltx_bibblock">Show me the numbers. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib7.1.1">Analytics Press</em> 2 (2004). </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib8"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Godfrey et al<span class="ltx_text" id="bib.bib8.2.2.1">.</span> (2016)</span> <span class="ltx_bibblock"> Parke Godfrey, Jarek Gryz, and Piotr Lasek. 2016. </span> <span class="ltx_bibblock">Interactive visualization of large data sets. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib8.3.1">IEEE transactions on knowledge and data engineering</em> 28, 8 (2016), 2142–2157. 
</span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib9"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Hearst and Tory (2019)</span> <span class="ltx_bibblock"> Marti Hearst and Melanie Tory. 2019. </span> <span class="ltx_bibblock">Would you like a chart with that? Incorporating visualizations into conversational interfaces. In <em class="ltx_emph ltx_font_italic" id="bib.bib9.1.1">2019 IEEE Visualization Conference (VIS)</em>. IEEE, 1–5. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib10"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Heer et al<span class="ltx_text" id="bib.bib10.2.2.1">.</span> (2010)</span> <span class="ltx_bibblock"> Jeffrey Heer, Michael Bostock, and Vadim Ogievetsky. 2010. </span> <span class="ltx_bibblock">A tour through the visualization zoo. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib10.3.1">Commun. ACM</em> 53, 6 (2010), 59–67. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib11"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Lee et al<span class="ltx_text" id="bib.bib11.2.2.1">.</span> (2024)</span> <span class="ltx_bibblock"> Bongshin Lee, Kim Marriott, Danielle Szafir, and Gerhard Weber. 2024. </span> <span class="ltx_bibblock">Inclusive Data Visualization (Dagstuhl Seminar 23252). </span> <span class="ltx_bibblock">(2024). </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib12"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Lundgard and Satyanarayan (2021)</span> <span class="ltx_bibblock"> Alan Lundgard and Arvind Satyanarayan. 2021. </span> <span class="ltx_bibblock">Accessible visualization via natural language descriptions: A four-level model of semantic content. 
</span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib12.1.1">IEEE transactions on visualization and computer graphics</em> 28, 1 (2021), 1073–1083. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib13"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Mellman (2020)</span> <span class="ltx_bibblock"> Letha Marie Mellman. 2020. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib13.1.1">Getting Online with Generation Z: Learning Preferences</em>. </span> <span class="ltx_bibblock">University of Northern Colorado. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib14"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Mittelstädt et al<span class="ltx_text" id="bib.bib14.2.2.1">.</span> (2015)</span> <span class="ltx_bibblock"> Sebastian Mittelstädt, Dominik Jäckle, Florian Stoffel, and Daniel A Keim. 2015. </span> <span class="ltx_bibblock">Colorcat: Guided design of colormaps for combined analysis tasks. </span> <span class="ltx_bibblock">(2015). </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib15"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Mládková (2017)</span> <span class="ltx_bibblock"> Ludmila Mládková. 2017. </span> <span class="ltx_bibblock">Learning habits of generation Z students. In <em class="ltx_emph ltx_font_italic" id="bib.bib15.1.1">European Conference on Knowledge Management</em>. Academic Conferences International Limited, 698–703. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib16"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Narechania et al<span class="ltx_text" id="bib.bib16.2.2.1">.</span> (2020)</span> <span class="ltx_bibblock"> Arpit Narechania, Arjun Srinivasan, and John Stasko. 2020. 
</span> <span class="ltx_bibblock">NL4DV: A toolkit for generating analytic specifications for data visualization from natural language queries. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib16.3.1">IEEE Transactions on Visualization and Computer Graphics</em> 27, 2 (2020), 369–379. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib17"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Rashid et al<span class="ltx_text" id="bib.bib17.2.2.1">.</span> (2021)</span> <span class="ltx_bibblock"> Md Mahinur Rashid, Hasin Kawsar Jahan, Annysha Huzzat, Riyasaat Ahmed Rahul, Tamim Bin Zakir, Farhana Meem, Md Saddam Hossain Mukta, and Swakkhar Shatabda. 2021. </span> <span class="ltx_bibblock">Text2Chart: A Multi-Staged Chart Generator from Natural Language Text. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib17.3.1">arXiv preprint arXiv:2104.04584</em> (2021). </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib18"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Setlur et al<span class="ltx_text" id="bib.bib18.2.2.1">.</span> (2016)</span> <span class="ltx_bibblock"> Vidya Setlur, Sarah E Battersby, Melanie Tory, Rich Gossweiler, and Angel X Chang. 2016. </span> <span class="ltx_bibblock">Eviza: A natural language interface for visual analysis. In <em class="ltx_emph ltx_font_italic" id="bib.bib18.3.1">Proceedings of the 29th annual symposium on user interface software and technology</em>. 365–377. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib19"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Singh et al<span class="ltx_text" id="bib.bib19.2.2.1">.</span> (2024)</span> <span class="ltx_bibblock"> Nikhil Singh, Lucy Lu Wang, and Jonathan Bragg. 2024. </span> <span class="ltx_bibblock">FigurA11y: AI Assistance for Writing Scientific Alt Text. 
In <em class="ltx_emph ltx_font_italic" id="bib.bib19.3.1">Proceedings of the 29th International Conference on Intelligent User Interfaces</em>. 886–906. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib20"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Smits et al<span class="ltx_text" id="bib.bib20.2.2.1">.</span> ([n. d.])</span> <span class="ltx_bibblock"> Thomas C Smits, Sehi L’Yi, Andrew Patrick Mar, and Nils Gehlenborg. [n. d.]. </span> <span class="ltx_bibblock">AltGosling: Automatic Generation of Text Descriptions for Accessible Genomics Data Visualization. </span> <span class="ltx_bibblock">([n. d.]). </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib21"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Stokes et al<span class="ltx_text" id="bib.bib21.2.2.1">.</span> (2022)</span> <span class="ltx_bibblock"> Chase Stokes, Vidya Setlur, Bridget Cogley, Arvind Satyanarayan, and Marti A Hearst. 2022. </span> <span class="ltx_bibblock">Striking a balance: Reader takeaways and preferences when integrating text and charts. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib21.3.1">IEEE Transactions on Visualization and Computer Graphics</em> 29, 1 (2022), 1233–1243. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib22"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Tian et al<span class="ltx_text" id="bib.bib22.2.2.1">.</span> (2024)</span> <span class="ltx_bibblock"> Yuan Tian, Weiwei Cui, Dazhen Deng, Xinjing Yi, Yurun Yang, Haidong Zhang, and Yingcai Wu. 2024. </span> <span class="ltx_bibblock">Chartgpt: Leveraging llms to generate charts from abstract natural language. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib22.3.1">IEEE Transactions on Visualization and Computer Graphics</em> (2024). 
</span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib23"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Tufte and Graves-Morris (1983)</span> <span class="ltx_bibblock"> Edward R Tufte and Peter R Graves-Morris. 1983. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib23.1.1">The visual display of quantitative information</em>. Vol. 2. </span> <span class="ltx_bibblock">Graphics press Cheshire, CT. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib24"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Vemulapalli (2024)</span> <span class="ltx_bibblock"> Gopichand Vemulapalli. 2024. </span> <span class="ltx_bibblock">Overcoming data literacy barriers: empowering non-technical teams. </span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib24.1.1">International Journal of Holistic Management Perspectives</em> 5, 5 (2024), 1–17. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib25"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Wu and Szafir (2023)</span> <span class="ltx_bibblock"> Keke Wu and Danielle Albers Szafir. 2023. </span> <span class="ltx_bibblock">Empowering People with Intellectual and Developmental Disabilities through Cognitively Accessible Visualizations. In <em class="ltx_emph ltx_font_italic" id="bib.bib25.1.1">2023 IEEE Workshop on Visualization for Social Good (VIS4Good)</em>. IEEE, 1–5. </span> <span class="ltx_bibblock"> </span> </li> <li class="ltx_bibitem" id="bib.bib26"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">Yaru and Harun (2024)</span> <span class="ltx_bibblock"> Zhou Yaru and Azahar Harun. 2024. </span> <span class="ltx_bibblock">An investigation into the reading direction preferences of Generation Z: a study on the design of Asian Winter Olympics posters. 
</span> <span class="ltx_bibblock"><em class="ltx_emph ltx_font_italic" id="bib.bib26.1.1">International Journal of Art and Design (IJAD)</em> 8, 1 (2024), 187–198. </span> <span class="ltx_bibblock"> </span> </li> </ul> </section> <section class="ltx_appendix" id="Ax1"> <h2 class="ltx_title ltx_title_appendix">Appendix</h2> </section> <section class="ltx_appendix" id="A1"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix A </span>Related Works Cont.</h2> <section class="ltx_subsection" id="A1.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">A.1. </span>Accessibility</h3> <div class="ltx_para" id="A1.SS1.p1"> <p class="ltx_p" id="A1.SS1.p1.1">Firstly, addressing traditional disabilities like vision impairments or colorblindness is crucial for creating accessible data outputs. Those who have trouble seeing may have difficulty digesting data visualizations and prefer natural language or text <cite class="ltx_cite ltx_citemacro_citep">(Lundgard and Satyanarayan, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib12" title="">2021</a>)</cite>. In a similar vein, colorblind users may also have trouble with visualizations, as they often rely heavily on discerning between colors <cite class="ltx_cite ltx_citemacro_citep">(Mittelstädt et al<span class="ltx_text">.</span>, <a class="ltx_ref" href="https://arxiv.org/html/2411.07451v1#bib.bib14" title="">2015</a>)</cite>. Data analysis also can be difficult for neurodivergent users, as data is often overwhelming and can be cognitively complex. 
Given these points, there exists a need to make data accessible and output it in a way that is personalized to the needs of the individual.</p> </div> </section> </section> <section class="ltx_appendix" id="A2"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix B </span>Study Design</h2> <section class="ltx_subsection" id="A2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">B.1. </span>Human Annotations</h3> <div class="ltx_para" id="A2.SS1.p1"> <ul class="ltx_itemize" id="A2.I1"> <li class="ltx_item" id="A2.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i1.p1"> <p class="ltx_p" id="A2.I1.i1.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i1.p1.1.1">Title:</span> Output Medium Preference for Data Analytics Natural Language Questions</p> </div> </li> <li class="ltx_item" id="A2.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i2.p1"> <p class="ltx_p" id="A2.I1.i2.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i2.p1.1.1">Description</span>: We require output preferences for questions asked to an Analytics system. Given a question about analytics data such as “What is the total revenue last month”, please select the best output medium for the question as per the given instructions.</p> </div> </li> <li class="ltx_item" id="A2.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i3.p1"> <p class="ltx_p" id="A2.I1.i3.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i3.p1.1.1">Total Questions:</span> We aim to have participants complete 20 questions. Justification: we want to keep the survey brief for MTurkers so that they do not just click through it. 
Potential drawback: not getting a large number of results from the same MTurker.</p> </div> </li> <li class="ltx_item" id="A2.I1.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i4.p1"> <p class="ltx_p" id="A2.I1.i4.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i4.p1.1.1">Fifteen Minutes:</span> Given that participants will answer 20 questions and must also read the instructions, this should be an ample amount of time.</p> </div> </li> <li class="ltx_item" id="A2.I1.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i5.p1"> <p class="ltx_p" id="A2.I1.i5.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i5.p1.1.1">Should Annotators do the same questions</span>: No. Justification: providing different questions can give us a wider spread of data. However, we will use the same types of questions (summary number, visualization, etc.). Potential drawback: possibly more variation across annotators.</p> </div> </li> <li class="ltx_item" id="A2.I1.i6" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i6.p1"> <p class="ltx_p" id="A2.I1.i6.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i6.p1.1.1">Price:</span> $0.40 (40 cents). Given that we will have up to 20 questions, we may want to offer more money than usual, as the survey will be longer.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I1.i7.p1"> <p class="ltx_p" id="A2.I1.i7.p1.1"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.p1.1.1">Hypotheses</span>:</p> <ul class="ltx_itemize" id="A2.I1.i7.I1"> <li class="ltx_item" id="A2.I1.i7.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i1.p1"> <p 
class="ltx_p" id="A2.I1.i7.I1.i1.p1.1">1. Participants with more of a data analytics background will prefer text outputs or tables for more precise information.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i2.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i2.p1"> <p class="ltx_p" id="A2.I1.i7.I1.i2.p1.1">2. Participants with more of a managerial or decision-making role will prefer visualizations as they may be more presentation-oriented.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i3.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i3.p1"> <p class="ltx_p" id="A2.I1.i7.I1.i3.p1.1">3. Older participants will prefer outputs with more visualizations while younger participants will prefer outputs that are primarily text.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7.I1.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i4.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i4.p1"> <p class="ltx_p" id="A2.I1.i7.I1.i4.p1.1">4. There will be a strong correlation between output preference and device preference, with mobile users preferring textual outputs.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7.I1.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i5.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i5.p1"> <p class="ltx_p" id="A2.I1.i7.I1.i5.p1.1">5. 
Participants in more technical industries (finance, IT, and tech) will prefer charts and text outputs as they are more precise.</p> </div> </li> <li class="ltx_item" id="A2.I1.i7.I1.i6" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I1.i7.I1.i6.1.1.1">–</span></span> <div class="ltx_para" id="A2.I1.i7.I1.i6.p1"> <p class="ltx_p" id="A2.I1.i7.I1.i6.p1.1">6. The preferences of human participants will align closely with the simulated preferences of an LLM.</p> </div> </li> </ul> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="A2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">B.2. </span>User Characteristics Questions</h3> <div class="ltx_para" id="A2.SS2.p1"> <ul class="ltx_itemize" id="A2.I2"> <li class="ltx_item" id="A2.I2.ix1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix1.1.1.1">QA:</span></span> <div class="ltx_para" id="A2.I2.ix1.p1"> <p class="ltx_p" id="A2.I2.ix1.p1.1">How would you rate your experience with data analysis?</p> <ul class="ltx_itemize" id="A2.I2.ix1.I1"> <li class="ltx_item" id="A2.I2.ix1.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix1.I1.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix1.I1.i1.p1"> <p class="ltx_p" id="A2.I2.ix1.I1.i1.p1.1">Responses ranged from very unfamiliar to very familiar.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix2.1.1.1">QB:</span></span> <div class="ltx_para" id="A2.I2.ix2.p1"> <p class="ltx_p" id="A2.I2.ix2.p1.1">How would you rate your experience level interacting with data visualizations?</p> <ul class="ltx_itemize" id="A2.I2.ix2.I2"> <li class="ltx_item" id="A2.I2.ix2.I2.i1" 
style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix2.I2.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix2.I2.i1.p1"> <p class="ltx_p" id="A2.I2.ix2.I2.i1.p1.1">Responses ranged from not familiar to very familiar.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix3.1.1.1">QC:</span></span> <div class="ltx_para" id="A2.I2.ix3.p1"> <p class="ltx_p" id="A2.I2.ix3.p1.1">What industry do you currently work in?</p> <ul class="ltx_itemize" id="A2.I2.ix3.I3"> <li class="ltx_item" id="A2.I2.ix3.I3.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix3.I3.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix3.I3.i1.p1"> <p class="ltx_p" id="A2.I2.ix3.I3.i1.p1.1">Response examples include education, management, customer support, and other similar options.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix4.1.1.1">QD:</span></span> <div class="ltx_para" id="A2.I2.ix4.p1"> <p class="ltx_p" id="A2.I2.ix4.p1.1">Which of the following best describes your primary role at work?</p> <ul class="ltx_itemize" id="A2.I2.ix4.I4"> <li class="ltx_item" id="A2.I2.ix4.I4.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix4.I4.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix4.I4.i1.p1"> <p class="ltx_p" id="A2.I2.ix4.I4.i1.p1.1">Response examples include decision maker, analyst, manager, and other similar options.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" 
id="A2.I2.ix5.1.1.1">QE:</span></span> <div class="ltx_para" id="A2.I2.ix5.p1"> <p class="ltx_p" id="A2.I2.ix5.p1.1">Which age range best describes you?</p> <ul class="ltx_itemize" id="A2.I2.ix5.I5"> <li class="ltx_item" id="A2.I2.ix5.I5.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix5.I5.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix5.I5.i1.p1"> <p class="ltx_p" id="A2.I2.ix5.I5.i1.p1.1">Responses include age ranges from 18 to 45+.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix6" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix6.1.1.1">QF:</span></span> <div class="ltx_para" id="A2.I2.ix6.p1"> <p class="ltx_p" id="A2.I2.ix6.p1.1">Which education level best describes you?</p> <ul class="ltx_itemize" id="A2.I2.ix6.I6"> <li class="ltx_item" id="A2.I2.ix6.I6.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix6.I6.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix6.I6.i1.p1"> <p class="ltx_p" id="A2.I2.ix6.I6.i1.p1.1">Response examples include answers that range from high school to graduate school.</p> </div> </li> </ul> </div> </li> <li class="ltx_item" id="A2.I2.ix7" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix7.1.1.1">QG:</span></span> <div class="ltx_para" id="A2.I2.ix7.p1"> <p class="ltx_p" id="A2.I2.ix7.p1.1">Which gender best describes you?</p> <ul class="ltx_itemize" id="A2.I2.ix7.I7"> <li class="ltx_item" id="A2.I2.ix7.I7.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix7.I7.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix7.I7.i1.p1"> <p class="ltx_p" id="A2.I2.ix7.I7.i1.p1.1">Response examples include male, female, non-binary, and prefer not to answer.</p> </div> </li>
</ul> </div> </li> <li class="ltx_item" id="A2.I2.ix8" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix8.1.1.1">QH:</span></span> <div class="ltx_para" id="A2.I2.ix8.p1"> <p class="ltx_p" id="A2.I2.ix8.p1.1">Which device do you use most frequently for work-related tasks?</p> <ul class="ltx_itemize" id="A2.I2.ix8.I8"> <li class="ltx_item" id="A2.I2.ix8.I8.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item"><span class="ltx_text ltx_font_bold" id="A2.I2.ix8.I8.i1.1.1.1">–</span></span> <div class="ltx_para" id="A2.I2.ix8.I8.i1.p1"> <p class="ltx_p" id="A2.I2.ix8.I8.i1.p1.1">Response examples include phones, desktop computers, and tablets.</p> </div> </li> </ul> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="A2.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">B.3. </span>Survey Questions</h3> <div class="ltx_para" id="A2.SS3.p1"> <p class="ltx_p" id="A2.SS3.p1.1">Some examples of questions that were used in our survey are as follows:</p> </div> <div class="ltx_para" id="A2.SS3.p2"> <ul class="ltx_itemize" id="A2.I3"> <li class="ltx_item" id="A2.I3.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I3.i1.p1"> <p class="ltx_p" id="A2.I3.i1.p1.1">Forecast the growth rate of our paid customers in the next quarter.</p> </div> </li> <li class="ltx_item" id="A2.I3.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I3.i2.p1"> <p class="ltx_p" id="A2.I3.i2.p1.1">Show me the distribution of user characteristics based on age groups.</p> </div> </li> <li class="ltx_item" id="A2.I3.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I3.i3.p1"> <p class="ltx_p" id="A2.I3.i3.p1.1">What are my top 20 attributes by highest segment count?</p> </div> </li> <li 
class="ltx_item" id="A2.I3.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I3.i4.p1"> <p class="ltx_p" id="A2.I3.i4.p1.1">What is the average time spent by users on the website in the last week?</p> </div> </li> <li class="ltx_item" id="A2.I3.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A2.I3.i5.p1"> <p class="ltx_p" id="A2.I3.i5.p1.1">Display the distribution of monthly revenue from different shipping methods.</p> </div> </li> </ul> </div> </section> </section> <section class="ltx_appendix" id="A3"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix C </span>Further Discussion</h2> <section class="ltx_subsection" id="A3.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">C.1. </span>Implications for Data Tools</h3> <div class="ltx_para" id="A3.SS1.p1"> <p class="ltx_p" id="A3.SS1.p1.1">Having established that both user characteristics and work experiences influence data output preferences, there are significant opportunities to use these findings to design data tools and large language models (LLMs) that better meet the individual needs of users. For example, a system could identify a user as a young data analyst and could use this information to confidently display a chart. Similarly, an older decision-maker in finance or accounting could be recommended a table. Either way, there is an opportunity for LLMs and other data tools to use these insights to create a personalized user experience, increasing satisfaction and efficiency.</p> </div> <div class="ltx_para" id="A3.SS1.p2"> <p class="ltx_p" id="A3.SS1.p2.1">We do not, however, expect this data to be the be-all and end-all for data tools. Instead, this data should be used as a foundation to be built upon. LLMs, for example, could start from this foundation and continue to build upon it by gathering user preferences over time.
As data and personalization grow increasingly interdependent, the insights from this study can support this growth and lay the foundation for tools that better fit the needs of users.</p> </div> </section> <section class="ltx_subsection" id="A3.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">C.2. </span>Influence of Work Experience</h3> <div class="ltx_para" id="A3.SS2.p1"> <p class="ltx_p" id="A3.SS2.p1.1">A user’s work experience, specifically the role they play at work and the industry they work in, significantly affects the data outputs they prefer. In terms of a user’s role at work, we found that analysis-oriented roles have a stronger preference for charts. Meanwhile, those in decision-making roles showed the strongest preference for tables, choosing them about 52% of the time. From this data, we gathered that a user’s role influences their data output preference. Workers who preferred charts may have valued the high-level insights charts provide, while those who preferred tables may have appreciated the quick and accurate information they offer.</p> </div> <div class="ltx_para" id="A3.SS2.p2"> <p class="ltx_p" id="A3.SS2.p2.1">On a similar note, a user’s industry often influences the data output they prefer most. Notably, those in development and IT preferred both charts and tables at a higher percentage, potentially due to the need to display and present data trends to their colleagues while also needing precise data. On the other hand, those in finance and accounting preferred tables, which aligns with their need for large amounts of precise data points that are easy to compare.
Overall, we found that industries with a more visual storytelling nature tend to prefer charts, while those that require precision prefer tables.</p> </div> <div class="ltx_para" id="A3.SS2.p3"> <p class="ltx_p" id="A3.SS2.p3.1">In total, these findings support the overall thesis that there is an opportunity to display data in a way that is personalized based on a user’s background. Specifically, different workers prefer to see their data displayed in various ways depending on their roles and the industry in which they work. This should come as no surprise, as different work roles have different data needs: what works well for one industry might be detrimental in another and vice versa. At a high level, these takeaways can be used to create data experiences that better cater to the individual user based on their work industry or role.</p> </div> <figure class="ltx_figure" id="A3.F10"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="196" id="A3.F10.g1" src="x10.png" width="747"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 10. </span>Ranking results for RQ1 showed that users preferred Tables at 41.7%, with Charts trailing at 36.32% and Text least preferred at 21.97%. </figcaption> </figure> <figure class="ltx_figure" id="A3.F11"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="500" id="A3.F11.g1" src="x11.png" width="821"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 11. </span><span class="ltx_text ltx_font_bold" id="A3.F11.3.1">The chart shows that if the user is more familiar with data analysis, they prefer charts more. However, if they have less familiarity, they begin to prefer tables much more.
Furthermore, users with more data analysis experience prefer text less, and the preference for text increases as data analysis experience decreases (e.g., 21.5% for very experienced to 27% for very inexperienced). </span> <span class="ltx_text ltx_font_bold" id="A3.F11.4.2">User preference by Data Analysis Experience</span> </figcaption> </figure> <figure class="ltx_figure" id="A3.F12"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="88" id="A3.F12.g1" src="x12.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 12. </span>Definitions from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F13"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="273" id="A3.F13.g1" src="x13.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 13. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F14"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="304" id="A3.F14.g1" src="x14.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 14. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F15"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="509" id="A3.F15.g1" src="x15.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 15.
</span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F16"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="387" id="A3.F16.g1" src="x16.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 16. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F17"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="390" id="A3.F17.g1" src="x17.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 17. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F18"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="385" id="A3.F18.g1" src="x18.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 18. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F19"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="383" id="A3.F19.g1" src="x19.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 19. </span>Example question from user survey</figcaption> </figure> <figure class="ltx_figure" id="A3.F20"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="274" id="A3.F20.g1" src="x20.png" width="822"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 20. 
</span>User preferences by Data Analysis Experience and Data Visualization Experience.</figcaption> </figure> <figure class="ltx_figure" id="A3.F21"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="634" id="A3.F21.g1" src="x21.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 21. </span>Comparing the users’ experience with visualizations and data analysis to their preference in terms of whether they prefer the answer to be shown to them as a chart, table, or text. Note we normalize each output type (rows) and combine counts of very familiar and familiar, and very unfamiliar and unfamiliar. </figcaption> </figure> <figure class="ltx_figure" id="A3.F22"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="634" id="A3.F22.g1" src="x22.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 22. </span>Comparing the users’ experience with visualizations and data analysis to their preference in terms of whether they prefer the answer to be shown as a chart, table, or text. Note we normalize each output type (columns) and combine counts of very familiar and familiar, and very unfamiliar and unfamiliar. </figcaption> </figure> <figure class="ltx_figure" id="A3.F23"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="638" id="A3.F23.g1" src="x23.png" width="829"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 23. </span>Comparing the users’ experience with visualizations and the role of the user to their preference in terms of whether they prefer the answer to be shown to them as a chart, table, or text. Note we normalize each output type (rows) and combine counts of very familiar and familiar, and very unfamiliar and unfamiliar.
</figcaption> </figure> <figure class="ltx_figure" id="A3.F24"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="636" id="A3.F24.g1" src="x24.png" width="830"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 24. </span>Comparing the users’ experience with data analysis and the role of the user to their preference in terms of whether they prefer the answer to be shown to them as a chart, table, or text. Note we normalize each output type (rows) and combine counts of very familiar and familiar, and very unfamiliar and unfamiliar. </figcaption> </figure> <figure class="ltx_figure" id="A3.F25"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="638" id="A3.F25.g1" src="x25.png" width="829"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 25. </span>Comparing the users’ experience with visualizations and the role of the user to their preference in terms of whether they prefer the answer to be shown to them as a chart, table, or text. Note we normalize each output type (rows) and combine counts of very familiar and familiar, and very unfamiliar and unfamiliar.
</figcaption> </figure> <figure class="ltx_figure" id="A3.F26"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_1"> <blockquote class="ltx_quote ltx_centering ltx_figure_panel" id="A3.F26.3"> <p class="ltx_p" id="A3.F26.3.1"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.1.1" style="font-size:90%;">Given the data analytics question below, along with the list of user characteristics (preferences indicated by the user), and list of questions and responses for the user, please select how the answer to the question should be presented (e.g., Table, Text, Chart) for the specific user with the user characteristics and the user’s preferences for other questions.</span></p> <p class="ltx_p" id="A3.F26.3.2"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.2.1" style="font-size:90%;">The possible options are: * Table * Text * Chart</span></p> <p class="ltx_p" id="A3.F26.3.3"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.3.1" style="font-size:90%;">Here are the user characteristics: * visualization experience: somewhat familiar (3/5) * data analysis experience: unfamiliar (2/5) * role: manager</span></p> <p class="ltx_p" id="A3.F26.3.4"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.4.1" style="font-size:90%;">Here is a list of questions and preferences for how the user wanted the answer to be
presented:</span></p> <p class="ltx_p" id="A3.F26.3.5"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.5.1" style="font-size:90%;">Question: What is the total count of ad clicks recorded? <br class="ltx_break"/>Answer: Text</span></p> <p class="ltx_p" id="A3.F26.3.6"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.6.1" style="font-size:90%;">Question: Analyze the distribution of Click-Throughs by Age groups (20-25) in Serbia <br class="ltx_break"/>Answer: Table</span></p> <p class="ltx_p" id="A3.F26.3.7"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.7.1" style="font-size:90%;">Question: Segment our user base by location (urban, suburban, rural) in the automotive industry. <br class="ltx_break"/>Answer: Text</span></p> <p class="ltx_p" id="A3.F26.3.8"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.8.1" style="font-size:90%;">Question: Check in on mobile visits in July and check the overlap with loyalty level B? <br class="ltx_break"/>Answer: Table</span></p> <p class="ltx_p" id="A3.F26.3.9"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.9.1" style="font-size:90%;">Question: What is the distribution of cart views across various entry pages (X, Y, Z) in Serbia? <br class="ltx_break"/>Answer: Table</span></p> <p class="ltx_p" id="A3.F26.3.10"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.10.1" style="font-size:90%;">Note that: * Output the answer with the prefix "Answer:" on a separate line. * Do not include any explanations in the answers. 
* The user experience with visualization and data analysis is on a 5-point Likert scale with options from very familiar (5) to very unfamiliar (1).</span></p> <p class="ltx_p" id="A3.F26.3.11"><span class="ltx_text ltx_font_typewriter" id="A3.F26.3.11.1" style="font-size:90%;">Question: Compare revenue for the US <br class="ltx_break"/>Answer:</span></p> </blockquote> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 26. </span>Example of how user characteristic information was fed into GPT one step at a time. This approach allowed GPT to use each user characteristic to add to the overall persona, resulting in answers based on the user’s characteristics.</figcaption> </figure> <figure class="ltx_table" id="A3.T1"> <table class="ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle" id="A3.T1.10"> <thead class="ltx_thead"> <tr class="ltx_tr" id="A3.T1.10.11.1"> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt" id="A3.T1.10.11.1.1"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.1.1" style="font-size:90%;">Users</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt" id="A3.T1.10.11.1.2"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.2.1" style="font-size:90%;">K=40</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt" id="A3.T1.10.11.1.3"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.3.1" style="font-size:90%;">K=20</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt" id="A3.T1.10.11.1.4"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.4.1" style="font-size:90%;">K=10</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt"
id="A3.T1.10.11.1.5"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.5.1" style="font-size:90%;">K=5</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt" id="A3.T1.10.11.1.6"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.6.1" style="font-size:90%;">No Few-Shot</span></th> <th class="ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt" id="A3.T1.10.11.1.7"><span class="ltx_text ltx_font_bold" id="A3.T1.10.11.1.7.1" style="font-size:90%;">No Person.</span></th> </tr> </thead> <tbody class="ltx_tbody"> <tr class="ltx_tr" id="A3.T1.1.1"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t" id="A3.T1.1.1.1"><math alttext="u_{1}" class="ltx_Math" display="inline" id="A3.T1.1.1.1.m1.1"><semantics id="A3.T1.1.1.1.m1.1a"><msub id="A3.T1.1.1.1.m1.1.1" xref="A3.T1.1.1.1.m1.1.1.cmml"><mi id="A3.T1.1.1.1.m1.1.1.2" mathsize="90%" xref="A3.T1.1.1.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.1.1.1.m1.1.1.3" mathsize="90%" xref="A3.T1.1.1.1.m1.1.1.3.cmml">1</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.1.1.1.m1.1b"><apply id="A3.T1.1.1.1.m1.1.1.cmml" xref="A3.T1.1.1.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.1.1.1.m1.1.1.1.cmml" xref="A3.T1.1.1.1.m1.1.1">subscript</csymbol><ci id="A3.T1.1.1.1.m1.1.1.2.cmml" xref="A3.T1.1.1.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.1.1.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.1.1.1.m1.1.1.3">1</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.1.1.1.m1.1c">u_{1}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.1.1.1.m1.1d">italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T1.1.1.2"><span class="ltx_text" id="A3.T1.1.1.2.1" style="font-size:90%;">0.80</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T1.1.1.3"><span class="ltx_text" id="A3.T1.1.1.3.1" style="font-size:90%;">0.80</span></td> <td class="ltx_td ltx_align_center 
ltx_border_t" id="A3.T1.1.1.4"><span class="ltx_text" id="A3.T1.1.1.4.1" style="font-size:90%;">0.75</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T1.1.1.5"><span class="ltx_text" id="A3.T1.1.1.5.1" style="font-size:90%;">0.73</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T1.1.1.6"><span class="ltx_text" id="A3.T1.1.1.6.1" style="font-size:90%;">0.54</span></td> <td class="ltx_td ltx_align_center ltx_border_t" id="A3.T1.1.1.7"><span class="ltx_text" id="A3.T1.1.1.7.1" style="font-size:90%;">0.64</span></td> </tr> <tr class="ltx_tr" id="A3.T1.2.2"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.2.2.1"><math alttext="u_{2}" class="ltx_Math" display="inline" id="A3.T1.2.2.1.m1.1"><semantics id="A3.T1.2.2.1.m1.1a"><msub id="A3.T1.2.2.1.m1.1.1" xref="A3.T1.2.2.1.m1.1.1.cmml"><mi id="A3.T1.2.2.1.m1.1.1.2" mathsize="90%" xref="A3.T1.2.2.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.2.2.1.m1.1.1.3" mathsize="90%" xref="A3.T1.2.2.1.m1.1.1.3.cmml">2</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.2.2.1.m1.1b"><apply id="A3.T1.2.2.1.m1.1.1.cmml" xref="A3.T1.2.2.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.2.2.1.m1.1.1.1.cmml" xref="A3.T1.2.2.1.m1.1.1">subscript</csymbol><ci id="A3.T1.2.2.1.m1.1.1.2.cmml" xref="A3.T1.2.2.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.2.2.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.2.2.1.m1.1.1.3">2</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.2.2.1.m1.1c">u_{2}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.2.2.1.m1.1d">italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.2"><span class="ltx_text" id="A3.T1.2.2.2.1" style="font-size:90%;">0.81</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.3"><span class="ltx_text" id="A3.T1.2.2.3.1" style="font-size:90%;">0.77</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.4"><span 
class="ltx_text" id="A3.T1.2.2.4.1" style="font-size:90%;">0.78</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.5"><span class="ltx_text" id="A3.T1.2.2.5.1" style="font-size:90%;">0.82</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.6"><span class="ltx_text" id="A3.T1.2.2.6.1" style="font-size:90%;">0.28</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.2.2.7"><span class="ltx_text" id="A3.T1.2.2.7.1" style="font-size:90%;">0.20</span></td> </tr> <tr class="ltx_tr" id="A3.T1.3.3"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.3.3.1"><math alttext="u_{3}" class="ltx_Math" display="inline" id="A3.T1.3.3.1.m1.1"><semantics id="A3.T1.3.3.1.m1.1a"><msub id="A3.T1.3.3.1.m1.1.1" xref="A3.T1.3.3.1.m1.1.1.cmml"><mi id="A3.T1.3.3.1.m1.1.1.2" mathsize="90%" xref="A3.T1.3.3.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.3.3.1.m1.1.1.3" mathsize="90%" xref="A3.T1.3.3.1.m1.1.1.3.cmml">3</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.3.3.1.m1.1b"><apply id="A3.T1.3.3.1.m1.1.1.cmml" xref="A3.T1.3.3.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.3.3.1.m1.1.1.1.cmml" xref="A3.T1.3.3.1.m1.1.1">subscript</csymbol><ci id="A3.T1.3.3.1.m1.1.1.2.cmml" xref="A3.T1.3.3.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.3.3.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.3.3.1.m1.1.1.3">3</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.3.3.1.m1.1c">u_{3}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.3.3.1.m1.1d">italic_u start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.3.3.2"><span class="ltx_text" id="A3.T1.3.3.2.1" style="font-size:90%;">0.80</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.3.3.3"><span class="ltx_text" id="A3.T1.3.3.3.1" style="font-size:90%;">0.53</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.3.3.4"><span class="ltx_text" id="A3.T1.3.3.4.1" style="font-size:90%;">0.55</span></td> <td 
class="ltx_td ltx_align_center" id="A3.T1.3.3.5"><span class="ltx_text" id="A3.T1.3.3.5.1" style="font-size:90%;">0.42</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.3.3.6"><span class="ltx_text" id="A3.T1.3.3.6.1" style="font-size:90%;">0.28</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.3.3.7"><span class="ltx_text" id="A3.T1.3.3.7.1" style="font-size:90%;">0.18</span></td> </tr> <tr class="ltx_tr" id="A3.T1.4.4"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.4.4.1"><math alttext="u_{4}" class="ltx_Math" display="inline" id="A3.T1.4.4.1.m1.1"><semantics id="A3.T1.4.4.1.m1.1a"><msub id="A3.T1.4.4.1.m1.1.1" xref="A3.T1.4.4.1.m1.1.1.cmml"><mi id="A3.T1.4.4.1.m1.1.1.2" mathsize="90%" xref="A3.T1.4.4.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.4.4.1.m1.1.1.3" mathsize="90%" xref="A3.T1.4.4.1.m1.1.1.3.cmml">4</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.4.4.1.m1.1b"><apply id="A3.T1.4.4.1.m1.1.1.cmml" xref="A3.T1.4.4.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.4.4.1.m1.1.1.1.cmml" xref="A3.T1.4.4.1.m1.1.1">subscript</csymbol><ci id="A3.T1.4.4.1.m1.1.1.2.cmml" xref="A3.T1.4.4.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.4.4.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.4.4.1.m1.1.1.3">4</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.4.4.1.m1.1c">u_{4}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.4.4.1.m1.1d">italic_u start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.2"><span class="ltx_text" id="A3.T1.4.4.2.1" style="font-size:90%;">0.70</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.3"><span class="ltx_text" id="A3.T1.4.4.3.1" style="font-size:90%;">0.77</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.4"><span class="ltx_text" id="A3.T1.4.4.4.1" style="font-size:90%;">0.68</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.5"><span class="ltx_text" 
id="A3.T1.4.4.5.1" style="font-size:90%;">0.64</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.6"><span class="ltx_text" id="A3.T1.4.4.6.1" style="font-size:90%;">0.64</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.4.4.7"><span class="ltx_text" id="A3.T1.4.4.7.1" style="font-size:90%;">0.62</span></td> </tr> <tr class="ltx_tr" id="A3.T1.5.5"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.5.5.1"><math alttext="u_{5}" class="ltx_Math" display="inline" id="A3.T1.5.5.1.m1.1"><semantics id="A3.T1.5.5.1.m1.1a"><msub id="A3.T1.5.5.1.m1.1.1" xref="A3.T1.5.5.1.m1.1.1.cmml"><mi id="A3.T1.5.5.1.m1.1.1.2" mathsize="90%" xref="A3.T1.5.5.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.5.5.1.m1.1.1.3" mathsize="90%" xref="A3.T1.5.5.1.m1.1.1.3.cmml">5</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.5.5.1.m1.1b"><apply id="A3.T1.5.5.1.m1.1.1.cmml" xref="A3.T1.5.5.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.5.5.1.m1.1.1.1.cmml" xref="A3.T1.5.5.1.m1.1.1">subscript</csymbol><ci id="A3.T1.5.5.1.m1.1.1.2.cmml" xref="A3.T1.5.5.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.5.5.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.5.5.1.m1.1.1.3">5</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.5.5.1.m1.1c">u_{5}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.5.5.1.m1.1d">italic_u start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.5.5.2"><span class="ltx_text" id="A3.T1.5.5.2.1" style="font-size:90%;">0.70</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.5.5.3"><span class="ltx_text" id="A3.T1.5.5.3.1" style="font-size:90%;">0.47</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.5.5.4"><span class="ltx_text" id="A3.T1.5.5.4.1" style="font-size:90%;">0.38</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.5.5.5"><span class="ltx_text" id="A3.T1.5.5.5.1" style="font-size:90%;">0.31</span></td> <td class="ltx_td 
ltx_align_center" id="A3.T1.5.5.6"><span class="ltx_text" id="A3.T1.5.5.6.1" style="font-size:90%;">0.30</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.5.5.7"><span class="ltx_text" id="A3.T1.5.5.7.1" style="font-size:90%;">0.28</span></td> </tr> <tr class="ltx_tr" id="A3.T1.6.6"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.6.6.1"><math alttext="u_{6}" class="ltx_Math" display="inline" id="A3.T1.6.6.1.m1.1"><semantics id="A3.T1.6.6.1.m1.1a"><msub id="A3.T1.6.6.1.m1.1.1" xref="A3.T1.6.6.1.m1.1.1.cmml"><mi id="A3.T1.6.6.1.m1.1.1.2" mathsize="90%" xref="A3.T1.6.6.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.6.6.1.m1.1.1.3" mathsize="90%" xref="A3.T1.6.6.1.m1.1.1.3.cmml">6</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.6.6.1.m1.1b"><apply id="A3.T1.6.6.1.m1.1.1.cmml" xref="A3.T1.6.6.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.6.6.1.m1.1.1.1.cmml" xref="A3.T1.6.6.1.m1.1.1">subscript</csymbol><ci id="A3.T1.6.6.1.m1.1.1.2.cmml" xref="A3.T1.6.6.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.6.6.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.6.6.1.m1.1.1.3">6</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.6.6.1.m1.1c">u_{6}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.6.6.1.m1.1d">italic_u start_POSTSUBSCRIPT 6 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.2"><span class="ltx_text" id="A3.T1.6.6.2.1" style="font-size:90%;">0.67</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.3"><span class="ltx_text" id="A3.T1.6.6.3.1" style="font-size:90%;">0.68</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.4"><span class="ltx_text" id="A3.T1.6.6.4.1" style="font-size:90%;">0.68</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.5"><span class="ltx_text" id="A3.T1.6.6.5.1" style="font-size:90%;">0.60</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.6"><span class="ltx_text" id="A3.T1.6.6.6.1" 
style="font-size:90%;">0.53</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.6.6.7"><span class="ltx_text" id="A3.T1.6.6.7.1" style="font-size:90%;">0.51</span></td> </tr> <tr class="ltx_tr" id="A3.T1.7.7"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.7.7.1"><math alttext="u_{7}" class="ltx_Math" display="inline" id="A3.T1.7.7.1.m1.1"><semantics id="A3.T1.7.7.1.m1.1a"><msub id="A3.T1.7.7.1.m1.1.1" xref="A3.T1.7.7.1.m1.1.1.cmml"><mi id="A3.T1.7.7.1.m1.1.1.2" mathsize="90%" xref="A3.T1.7.7.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.7.7.1.m1.1.1.3" mathsize="90%" xref="A3.T1.7.7.1.m1.1.1.3.cmml">7</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.7.7.1.m1.1b"><apply id="A3.T1.7.7.1.m1.1.1.cmml" xref="A3.T1.7.7.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.7.7.1.m1.1.1.1.cmml" xref="A3.T1.7.7.1.m1.1.1">subscript</csymbol><ci id="A3.T1.7.7.1.m1.1.1.2.cmml" xref="A3.T1.7.7.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.7.7.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.7.7.1.m1.1.1.3">7</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.7.7.1.m1.1c">u_{7}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.7.7.1.m1.1d">italic_u start_POSTSUBSCRIPT 7 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.7.7.2"><span class="ltx_text" id="A3.T1.7.7.2.1" style="font-size:90%;">0.60</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.7.7.3"><span class="ltx_text" id="A3.T1.7.7.3.1" style="font-size:90%;">0.40</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.7.7.4"><span class="ltx_text" id="A3.T1.7.7.4.1" style="font-size:90%;">0.40</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.7.7.5"><span class="ltx_text" id="A3.T1.7.7.5.1" style="font-size:90%;">0.36</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.7.7.6"><span class="ltx_text" id="A3.T1.7.7.6.1" style="font-size:90%;">0.26</span></td> <td class="ltx_td ltx_align_center" 
id="A3.T1.7.7.7"><span class="ltx_text" id="A3.T1.7.7.7.1" style="font-size:90%;">0.26</span></td> </tr> <tr class="ltx_tr" id="A3.T1.8.8"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.8.8.1"><math alttext="u_{8}" class="ltx_Math" display="inline" id="A3.T1.8.8.1.m1.1"><semantics id="A3.T1.8.8.1.m1.1a"><msub id="A3.T1.8.8.1.m1.1.1" xref="A3.T1.8.8.1.m1.1.1.cmml"><mi id="A3.T1.8.8.1.m1.1.1.2" mathsize="90%" xref="A3.T1.8.8.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.8.8.1.m1.1.1.3" mathsize="90%" xref="A3.T1.8.8.1.m1.1.1.3.cmml">8</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.8.8.1.m1.1b"><apply id="A3.T1.8.8.1.m1.1.1.cmml" xref="A3.T1.8.8.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.8.8.1.m1.1.1.1.cmml" xref="A3.T1.8.8.1.m1.1.1">subscript</csymbol><ci id="A3.T1.8.8.1.m1.1.1.2.cmml" xref="A3.T1.8.8.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.8.8.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.8.8.1.m1.1.1.3">8</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.8.8.1.m1.1c">u_{8}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.8.8.1.m1.1d">italic_u start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.2"><span class="ltx_text" id="A3.T1.8.8.2.1" style="font-size:90%;">0.50</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.3"><span class="ltx_text" id="A3.T1.8.8.3.1" style="font-size:90%;">0.40</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.4"><span class="ltx_text" id="A3.T1.8.8.4.1" style="font-size:90%;">0.33</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.5"><span class="ltx_text" id="A3.T1.8.8.5.1" style="font-size:90%;">0.36</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.6"><span class="ltx_text" id="A3.T1.8.8.6.1" style="font-size:90%;">0.34</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.8.8.7"><span class="ltx_text" id="A3.T1.8.8.7.1" 
style="font-size:90%;">0.32</span></td> </tr> <tr class="ltx_tr" id="A3.T1.9.9"> <th class="ltx_td ltx_align_center ltx_th ltx_th_row" id="A3.T1.9.9.1"><math alttext="u_{9}" class="ltx_Math" display="inline" id="A3.T1.9.9.1.m1.1"><semantics id="A3.T1.9.9.1.m1.1a"><msub id="A3.T1.9.9.1.m1.1.1" xref="A3.T1.9.9.1.m1.1.1.cmml"><mi id="A3.T1.9.9.1.m1.1.1.2" mathsize="90%" xref="A3.T1.9.9.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.9.9.1.m1.1.1.3" mathsize="90%" xref="A3.T1.9.9.1.m1.1.1.3.cmml">9</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.9.9.1.m1.1b"><apply id="A3.T1.9.9.1.m1.1.1.cmml" xref="A3.T1.9.9.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.9.9.1.m1.1.1.1.cmml" xref="A3.T1.9.9.1.m1.1.1">subscript</csymbol><ci id="A3.T1.9.9.1.m1.1.1.2.cmml" xref="A3.T1.9.9.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.9.9.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.9.9.1.m1.1.1.3">9</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.9.9.1.m1.1c">u_{9}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.9.9.1.m1.1d">italic_u start_POSTSUBSCRIPT 9 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.2"><span class="ltx_text" id="A3.T1.9.9.2.1" style="font-size:90%;">0.40</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.3"><span class="ltx_text" id="A3.T1.9.9.3.1" style="font-size:90%;">0.37</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.4"><span class="ltx_text" id="A3.T1.9.9.4.1" style="font-size:90%;">0.33</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.5"><span class="ltx_text" id="A3.T1.9.9.5.1" style="font-size:90%;">0.36</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.6"><span class="ltx_text" id="A3.T1.9.9.6.1" style="font-size:90%;">0.34</span></td> <td class="ltx_td ltx_align_center" id="A3.T1.9.9.7"><span class="ltx_text" id="A3.T1.9.9.7.1" style="font-size:90%;">0.28</span></td> </tr> <tr class="ltx_tr" id="A3.T1.10.10"> <th 
class="ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb" id="A3.T1.10.10.1"><math alttext="u_{10}" class="ltx_Math" display="inline" id="A3.T1.10.10.1.m1.1"><semantics id="A3.T1.10.10.1.m1.1a"><msub id="A3.T1.10.10.1.m1.1.1" xref="A3.T1.10.10.1.m1.1.1.cmml"><mi id="A3.T1.10.10.1.m1.1.1.2" mathsize="90%" xref="A3.T1.10.10.1.m1.1.1.2.cmml">u</mi><mn id="A3.T1.10.10.1.m1.1.1.3" mathsize="90%" xref="A3.T1.10.10.1.m1.1.1.3.cmml">10</mn></msub><annotation-xml encoding="MathML-Content" id="A3.T1.10.10.1.m1.1b"><apply id="A3.T1.10.10.1.m1.1.1.cmml" xref="A3.T1.10.10.1.m1.1.1"><csymbol cd="ambiguous" id="A3.T1.10.10.1.m1.1.1.1.cmml" xref="A3.T1.10.10.1.m1.1.1">subscript</csymbol><ci id="A3.T1.10.10.1.m1.1.1.2.cmml" xref="A3.T1.10.10.1.m1.1.1.2">𝑢</ci><cn id="A3.T1.10.10.1.m1.1.1.3.cmml" type="integer" xref="A3.T1.10.10.1.m1.1.1.3">10</cn></apply></annotation-xml><annotation encoding="application/x-tex" id="A3.T1.10.10.1.m1.1c">u_{10}</annotation><annotation encoding="application/x-llamapun" id="A3.T1.10.10.1.m1.1d">italic_u start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT</annotation></semantics></math></th> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.2"><span class="ltx_text" id="A3.T1.10.10.2.1" style="font-size:90%;">0.40</span></td> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.3"><span class="ltx_text" id="A3.T1.10.10.3.1" style="font-size:90%;">0.34</span></td> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.4"><span class="ltx_text" id="A3.T1.10.10.4.1" style="font-size:90%;">0.15</span></td> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.5"><span class="ltx_text" id="A3.T1.10.10.5.1" style="font-size:90%;">0.23</span></td> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.6"><span class="ltx_text" id="A3.T1.10.10.6.1" style="font-size:90%;">0.18</span></td> <td class="ltx_td ltx_align_center ltx_border_bb" id="A3.T1.10.10.7"><span class="ltx_text" id="A3.T1.10.10.7.1" 
style="font-size:90%;">0.16</span></td> </tr> </tbody> </table> <figcaption class="ltx_caption ltx_centering" style="font-size:90%;"><span class="ltx_tag ltx_tag_table">Table 1. </span>Results for a small set of users, showing accuracy across different few-shot values and personalization approaches.</figcaption> </figure> <figure class="ltx_figure" id="A3.F27"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="A3.F27.3"><span class="ltx_text ltx_font_typewriter" id="A3.F27.3.1" style="font-size:90%;">Given the data analytics question below, please select how the answer to the question should be presented (e.g., Table, Text, Chart) for the specific user.<span class="ltx_text ltx_font_serif" id="A3.F27.3.1.1"></span></span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_figure_panel" id="A3.F27.4"><span class="ltx_text ltx_font_typewriter" id="A3.F27.4.1" style="font-size:90%;">The possible options are:<span class="ltx_text ltx_font_serif" id="A3.F27.4.1.1"></span></span></p> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <ul class="ltx_itemize ltx_figure_panel" id="A3.I1"> <li class="ltx_item" id="A3.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A3.I1.i1.p1"> <p class="ltx_p" id="A3.I1.i1.p1.1"><span class="ltx_text ltx_font_typewriter" id="A3.I1.i1.p1.1.1" style="font-size:90%;">Table</span><span class="ltx_text"
id="A3.I1.i1.p1.1.2" style="font-size:90%;"></span></p> </div> </li> <li class="ltx_item" id="A3.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A3.I1.i2.p1"> <p class="ltx_p" id="A3.I1.i2.p1.1"><span class="ltx_text ltx_font_typewriter" id="A3.I1.i2.p1.1.1" style="font-size:90%;">Text</span><span class="ltx_text" id="A3.I1.i2.p1.1.2" style="font-size:90%;"></span></p> </div> </li> <li class="ltx_item" id="A3.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A3.I1.i3.p1"> <p class="ltx_p" id="A3.I1.i3.p1.1"><span class="ltx_text ltx_font_typewriter" id="A3.I1.i3.p1.1.1" style="font-size:90%;">Chart</span><span class="ltx_text" id="A3.I1.i3.p1.1.2" style="font-size:90%;"></span></p> </div> </li> </ul> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <p class="ltx_p ltx_align_center ltx_figure_panel" id="A3.F27.5"><span class="ltx_text ltx_font_typewriter" id="A3.F27.5.1" style="font-size:90%;">[Question]</span></p> </div> </div> <figcaption class="ltx_caption"><span class="ltx_tag ltx_tag_figure">Figure 27. </span>Non-personalized approach: the baseline prompt, which is not tailored to any specific user.</figcaption> </figure> <figure class="ltx_figure" id="A3.F28"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="350" id="A3.F28.g1" src="x26.png" width="831"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 28. </span>For each user, we aggregate all their preferences and derive a distribution, shown above. 
The values are sorted by the probability of choosing ‘chart’, producing the observed curve.</figcaption> </figure> <figure class="ltx_figure" id="A3.F29"> <div class="ltx_flex_figure"> <div class="ltx_flex_cell ltx_flex_size_1"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="A3.F29.sf1"><img alt="Refer to caption" class="ltx_graphics ltx_img_landscape" height="546" id="A3.F29.sf1.g1" src="x27.png" width="831"/> <figcaption class="ltx_caption"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="A3.F29.sf1.2.1.1" style="font-size:80%;">(a)</span> </span><span class="ltx_text" id="A3.F29.sf1.3.2" style="font-size:80%;">Questions CCDF</span></figcaption> </figure> </div> <div class="ltx_flex_break"></div> <div class="ltx_flex_cell ltx_flex_size_1"> <figure class="ltx_figure ltx_figure_panel ltx_align_center" id="A3.F29.sf2"><img alt="Refer to caption" class="ltx_graphics ltx_img_landscape" height="546" id="A3.F29.sf2.g1" src="x28.png" width="831"/> <figcaption class="ltx_caption"><span class="ltx_tag ltx_tag_figure"><span class="ltx_text" id="A3.F29.sf2.2.1.1" style="font-size:80%;">(b)</span> </span><span class="ltx_text" id="A3.F29.sf2.3.2" style="font-size:80%;">Users CCDF</span></figcaption> </figure> </div> </div> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 29. </span>CCDF for questions and users. For each question we derive the distribution of user responses and, conversely, for each user the distribution of their responses; we then compute the CCDF of both. 
</figcaption> </figure> <div class="ltx_pagination ltx_role_newpage"></div> </section> </section> </article> </div> <footer class="ltx_page_footer"> <div class="ltx_page_logo">Generated on Tue Nov 12 00:21:54 2024 by <a class="ltx_LaTeXML_logo" href="http://dlmf.nist.gov/LaTeXML/"><span style="letter-spacing:-0.2em; margin-right:0.1em;">L<span class="ltx_font_smallcaps" style="position:relative; bottom:2.2pt;">a</span>T<span class="ltx_font_smallcaps" style="font-size:120%;position:relative; bottom:-0.2ex;">e</span></span><span style="font-size:90%; position:relative; bottom:-0.2ex;">XML</span><img alt="Mascot Sammy" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAAOCAYAAAD5YeaVAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9wKExQZLWTEaOUAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAdpJREFUKM9tkL+L2nAARz9fPZNCKFapUn8kyI0e4iRHSR1Kb8ng0lJw6FYHFwv2LwhOpcWxTjeUunYqOmqd6hEoRDhtDWdA8ApRYsSUCDHNt5ul13vz4w0vWCgUnnEc975arX6ORqN3VqtVZbfbTQC4uEHANM3jSqXymFI6yWazP2KxWAXAL9zCUa1Wy2tXVxheKA9YNoR8Pt+aTqe4FVVVvz05O6MBhqUIBGk8Hn8HAOVy+T+XLJfLS4ZhTiRJgqIoVBRFIoric47jPnmeB1mW/9rr9ZpSSn3Lsmir1fJZlqWlUonKsvwWwD8ymc/nXwVBeLjf7xEKhdBut9Hr9WgmkyGEkJwsy5eHG5vN5g0AKIoCAEgkEkin0wQAfN9/cXPdheu6P33fBwB4ngcAcByHJpPJl+fn54mD3Gg0NrquXxeLRQAAwzAYj8cwTZPwPH9/sVg8PXweDAauqqr2cDjEer1GJBLBZDJBs9mE4zjwfZ85lAGg2+06hmGgXq+j3+/DsixYlgVN03a9Xu8jgCNCyIegIAgx13Vfd7vdu+FweG8YRkjXdWy329+dTgeSJD3ieZ7RNO0VAXAPwDEAO5VKndi2fWrb9jWl9Esul6PZbDY9Go1OZ7PZ9z/lyuD3OozU2wAAAABJRU5ErkJggg=="/></a> </div></footer> </div> </body> </html>