<!DOCTYPE html> <html lang="ja"> <head> <title>Detecting Ambiguous Utterances in an Intelligent Assistant - LY Corporation R&amp;D - LY Corporation</title> <!-- Font preload --> <link rel="preload" href="/assets/fonts/LINESeedJP_OTF_Rg.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/assets/fonts/LINESeedJP_OTF_Bd.woff2" as="font" type="font/woff2" crossorigin> <link rel="preload" href="/assets/fonts/LINESeedJP_OTF_Eb.woff2" as="font" type="font/woff2" crossorigin> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width,initial-scale=1"> <meta name="description" content="In intelligent assistants that perform both chatting and tasks through dialogue, like Siri and Alexa, users often make ambiguous utterances such as &quot;I&#39;m hungry&quot; or &quot;I have a headache,&quot; which can be interpreted as either chat or task intents. Naively determining these intents can lead to mismatched responses, spoiling the user experience. Therefore, it is desirable to determine the ambiguity of user utterances. We created a dataset from an actual intelligent assistant via crowdsourcing and analyzed tendencies of ambiguous utterances. Using this labeled data of chat, task, and ambiguous intents, we developed a supervised intent classification model. To detect ambiguous utterances robustly, we propose feeding sentence embeddings developed from microblogs and search logs with a self-attention mechanism. Experiments showed that our model outperformed two baselines, including a strong LLM-based one. We will release the dataset upon acceptance to support future research. 
"><meta property="og:title" content="Detecting Ambiguous Utterances in an Intelligent Assistant - LY Corporation R&amp;D - LY Corporation"><meta property="og:description" content="In intelligent assistants that perform both chatting and tasks through dialogue, like Siri and Alexa, users often make ambiguous utterances such as &quot;I&#39;m hungry&quot; or &quot;I have a headache,&quot; which can be interpreted as either chat or task intents. Naively determining these intents can lead to mismatched responses, spoiling the user experience. Therefore, it is desirable to determine the ambiguity of user utterances. We created a dataset from an actual intelligent assistant via crowdsourcing and analyzed tendencies of ambiguous utterances. Using this labeled data of chat, task, and ambiguous intents, we developed a supervised intent classification model. To detect ambiguous utterances robustly, we propose feeding sentence embeddings developed from microblogs and search logs with a self-attention mechanism. Experiments showed that our model outperformed two baselines, including a strong LLM-based one. We will release the dataset upon acceptance to support future research. 
"> <meta property="og:locale" content="ja_JP"> <meta property="og:type" content="website"> <meta property="og:url" content="https://research.lycorp.co.jp/"> <meta property="og:site_name" content="LY Corporation R&D - LY Corporation"> <meta property="og:image" content="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/OGP_en.png"> <meta property="fb:app_id" content="313412695662677"> <link rel="icon" href="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/favicon.ico" type="image/vnd.microsoft.icon" /> <link rel="stylesheet" href="https://s.yimg.jp/images/research_lab/ly_rd/assets/style/css/style.css" /> <link rel="stylesheet" href="/css/patch.css" /> <script type="text/javascript" src="https://s.yimg.jp/images/research_lab/ly_rd/js/src/lib/jquery-3.4.1.min.js"></script> </head> <body> <header class="l-header"> <div class="l-header__head"> <div class="p-header-head"> <h1 class="p-header-head__title u-fw-normal"> <a href="/en">LY Corporation R&D</a> </h1> <a href="https://www.lycorp.co.jp/en/"><img src="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/logo_en.svg" alt="LY Corporation" class="p-header-head__logo"></a> </div> </div> <!--EMG--> <div id="EMG"> <!-- EMG noResult --> </div> <div id="EMG2"> <!-- EMG2 noResult --> </div> <div id="EMG3"> <!-- EMG3 noResult --> </div> <div class="l-header__foot"> <div class="p-header-foot"> <div class="p-header-menu"> <button type="button" class="p-header-menu__button"> <span class="p-header-menu__line"></span> <span class="p-header-menu__line"></span> <span class="p-header-menu__line"></span> </button> </div> <div class="p-header-menu-bg"></div> <div class="p-header-foot__inner"> <div class="p-header-foot__contents"> <nav class="p-header-nav"> <ul class="p-header-nav__items wide"> <li class="p-header-nav__item"> <a href="/en/aboutus" class="p-header-nav__link">About Us</a> </li> <li class="p-header-nav__item"> <a href="/en/research_area" class="p-header-nav__link">Research Area</a> </li> <li 
class="p-header-nav__item"> <a href="/en/news" class="p-header-nav__link">News</a> </li> <li class="p-header-nav__item"> <a href="/en/publications" class="p-header-nav__link p-header-nav__link--current">Publications</a> </li> <li class="p-header-nav__item"> <a href="/en/awards" class="p-header-nav__link">Awards</a> </li> <li class="p-header-nav__item"> <a href="/en/softwaredata" class="p-header-nav__link">Software/Data</a> </li> <li class="p-header-nav__item"> <a href="/en/people" class="p-header-nav__link">People</a> </li> <!-- spacer --> <li class="p-header-nav__item"></li><li class="p-header-nav__item"></li> </ul> </nav> <div class="p-header-langage"> <ul class="p-header-langage__items"> <li class="p-header-langage__item u-mr5"> <a href="/jp/publications/2125" class="p-header-langage__link p-header-langage__link--disabled">JP</a> </li> <li class="p-header-langage__item p-header-langage__item--right"> <span class="p-header-langage__link">EN</span> </li> </ul> </div> <div class="p-header-search"> <button type="button" class="p-header-search__icon"></button> <div class="p-header-search__wrap"> <div class="p-header-search__inner"> <form action="/en/search"> <input name="p" type="text" placeholder="Search" class="p-header-search__input"> </form> </div> </div> </div> </div> </div> </div> </div> </header> <main class="l-main"> <section class="c-hero"> <h2 class="c-hero__title">Publications</h2> </section> <div class="l-main__wrap publications-detail"> <section> <h3> <span class="publications-detail__text">CONFERENCE (INTERNATIONAL)</span> <span class="c-title-4th publications-detail__title">Detecting Ambiguous Utterances in an Intelligent Assistant</span> </h3> <p class="publications-detail__text"><a href="/en/people/34">Satoshi Akasaki</a>, Manabu Sassano</p> <p class="publications-detail__text">The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)</p> <p class="publications-detail__text u-mb30">November 10, 2024</p> <p 
class="publications-detail__text-main">In intelligent assistants that perform both chatting and tasks through dialogue, like Siri and Alexa, users often make ambiguous utterances such as "I'm hungry" or "I have a headache," which can be interpreted as either chat or task intents. Naively determining these intents can lead to mismatched responses, spoiling the user experience. Therefore, it is desirable to determine the ambiguity of user utterances. We created a dataset from an actual intelligent assistant via crowdsourcing and analyzed tendencies of ambiguous utterances. Using this labeled data of chat, task, and ambiguous intents, we developed a supervised intent classification model. To detect ambiguous utterances robustly, we propose feeding sentence embeddings developed from microblogs and search logs with a self-attention mechanism. Experiments showed that our model outperformed two baselines, including a strong LLM-based one. We will release the dataset upon acceptance to support future research. 
</p> <div class="c-button"> <ul class="c-button__items"> <li class="c-button__item p-publicationlist__button-item"> <a href="/en/research_area/1" class="c-button__link publications-detail__button">Natural Language Processing</a> </li> </ul> </div> <p class="publications-detail__download u-mt10"> <span class="u-fw-bold publications-detail__download-text">Paper : </span> <a href="https://aclanthology.org/2024.emnlp-industry.28/" target="_blank">Detecting Ambiguous Utterances in an Intelligent Assistant</a><img src="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/blanc_icon02.png" alt="opens in a new tab or window" style="margin-left: 3px;"> <span class="publications-detail__download-text">(external link)</span> </p> </section> </div> <footer class="l-footer"> <div class="l-footer__head"> <ul class="p-footer-sns"> <li class="p-footer-sns__list"> <a href="https://www.facebook.com/share.php?u=https://research.lycorp.co.jp/" onClick="window.open(encodeURI(decodeURI(this.href)), 'facebookwindow', 'width=650, height=470, personalbar=0, toolbar=0, scrollbars=1, resizable=1'); return false;" target="_blank"> <img src="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/sns_facebook.png" alt="facebook" class="p-footer-sns__img"> </a> </li> <li class="p-footer-sns__list"> <a href="https://twitter.com/share?url=https://research.lycorp.co.jp/" onClick="window.open(encodeURI(decodeURI(this.href)), 'tweetwindow', 'width=650, height=470, personalbar=0, toolbar=0, scrollbars=1, resizable=1'); return false;" target="_blank"> <img src="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/sns_twitter.png" alt="twitter" class="p-footer-sns__img"> </a> </li> <li class="p-footer-sns__list"> <a href="https://social-plugins.line.me/lineit/share?url=https://research.lycorp.co.jp/" onClick="window.open(encodeURI(decodeURI(this.href)), 'linewindow', 'width=650, height=470, personalbar=0, toolbar=0, scrollbars=1, resizable=1'); return false;" target="_blank"> <img 
src="https://s.yimg.jp/images/research_lab/ly_rd/assets/images/sns_line.png" alt="line" class="p-footer-sns__img"> </a> </li> </ul> </div> <div class="l-footer__foot"> <ul class="p-footer-nav"> <li class="p-footer-nav__item"><a href="https://www.lycorp.co.jp/en/company/privacypolicy/" class="p-footer-nav__link" target="_blank">Privacy Policy</a></li> <li class="p-footer-nav__item"><a href="https://privacy.lycorp.co.jp/en/" class="p-footer-nav__link" target="_blank">Privacy Center</a></li> <li class="p-footer-nav__item"><a href="https://www.lycorp.co.jp/en/company/terms/" class="p-footer-nav__link" target="_blank">Terms of Use</a></li> <li class="p-footer-nav__item"><a href="https://www.lycorp.co.jp/en/company/overview/" class="p-footer-nav__link" target="_blank">Company Overview</a></li> <li class="p-footer-nav__item"><a href="https://www.lycorp.co.jp/ja/recruit/" class="p-footer-nav__link" target="_blank">Careers</a></li> <li class="p-footer-nav__item"><a href="https://www.lycorp.co.jp/ja/contact/" class="p-footer-nav__link" target="_blank">Contact</a></li> </ul> <small class="p-footer-copyright"> <span class="p-footer-copyright__pc">&copy; LY Corporation</span> <span class="p-footer-copyright__sp">&copy; LY Corporation</span> </small> </div> <div class="c-pagetop"> <button type="button" class="c-pagetop__button"></button> </div> </footer> <script type="text/javascript" src="https://s.yimg.jp/images/research_lab/ly_rd/js/dist/main.bundle.js"></script> <script async src="https://s.yimg.jp/images/ds/yas/ya-1.4.0.min.js"></script> <script> window.yacmds = window.yacmds || []; window.ya = window.ya || function(){yacmds.push(arguments)}; ya('init', '346436060e644d5987723bc46b0bb00b', '9219a714-7f39-4dec-89f2-5fad717f8839'); ya('hit', 'pageview,webPerformance'); </script> </body> </html>
