<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Mohit Iyyer &mdash; Home</title> <link rel="stylesheet" href="css/new_master.css" /> <script> function unhide(divID) { var item = document.getElementById(divID); if (item) { item.className=(item.className=='hidden')?'unhidden':'hidden'; } } </script> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-52267931-1"); pageTracker._trackPageview(); } catch(err) {} </script> </head> <body> <div class="wrapper"> <div class="posts-wrapper"> <div class="post"> <img src='./data/mohit_iyyer_square.jpeg' style="float:right; width:220px;" /> <h1>mohit iyyer</h1> <h2>miyyer@cs.umass.edu // <a href="./data/cv.pdf"> CV</a> // <a href="https://scholar.google.com/citations?user=rBVA5tcAAAAJ&hl=en"> Scholar</a> // <a href="https://www.github.com/miyyer">github</a> // <a href="https://www.twitter.com/mohitiyyer">twitter</a></h2> <br> <div class="announcement-box"> In January 2025, I'll be returning to the University of Maryland's CS department as an associate professor. If you're interested in joining my lab, please <a href="https://www.cs.umd.edu/grad/apply">apply</a> to UMD! </div> <p>I am currently an associate professor in computer science at <a href="https://www.cics.umass.edu">UMass Amherst</a> and a member of <a href="https://nlp.cs.umass.edu">UMass NLP</a>. Previously, I was a Young Investigator at <a href="http://allenai.org/">AI2</a>; before that, I completed my PhD at the <a href="http://www.cs.umd.edu/">University of Maryland, College Park</a>. My research interests lie broadly in natural language processing and machine learning. Problems that I'm currently excited about include:<br> </p> <p style="margin-left:3em;margin-top:-0.5em;margin-bottom:1.25em;"> (1) Improving instruction following abilities of large language models for long-form generation<br> (2) Designing methods to evaluate long-form & multilingual text (e.g., for factuality and coherence)<br> (3) Building collaborative human-LLM systems to help human authors in creative writing tasks<br> (4) Increasing robustness of LLM-generated text detectors to attacks (e.g., paraphrasing) </p> </div> </div> <div class="posts-wrapper"> <div class="post" style="margin-bottom:1.25em;"> <ul class="news"> <li><strong>Sep. 2024:</strong> papers to appear at EMNLP 2024 on <a href="https://novelchallenge.github.io/">long-context LLM evaluation</a> and <a href="https://arxiv.org/abs/2406.14517">watermarking</a>, and EMNLP Findings on <a href="https://arxiv.org/abs/2406.19276">factuality evaluation</a> and <a href="https://arxiv.org/abs/2406.19371">long-form instruction tuning</a></li> <li><strong>Aug. 2024:</strong> keynote at <a href="https://kddcup24.github.io/">KDD CUP 2024 RAG workshop</a></li> <li><strong>Jul. 2024:</strong> papers to appear at COLM 2024 on <a href="https://arxiv.org/abs/2404.01261">faithfulness in book summarization</a> and <a href="https://arxiv.org/abs/2404.13784">image reproduction via multimodal LLMs</a></li> <li><strong>Jul. 2024: </strong> keynote at <a href="https://longcontextfm.github.io/">Workshop on Long Context Foundation Models</a> (ICML 2024)</li> <li><strong>Jun. 
2024: </strong> released <a href="https://novelchallenge.github.io/">NoCha</a>, a new long-context LLM benchmark!</li> <li><strong>Jun. 2024: </strong> talk at Tel Aviv University NLP seminar</li> <li><strong>May 2024: </strong> papers to appear at ACL 2024 on multi-stage knowledge distillation and ACL Findings on <a href="https://arxiv.org/abs/2310.03214">search-augmented LLMs</a></li> <li><strong>Mar. 2024: </strong> papers to appear at NAACL 2024 on <a href="https://arxiv.org/abs/2311.01449">TopicGPT</a> and at NAACL Findings on <a href="https://arxiv.org/abs/2311.09517">grammar error explanation</a></li> <li><strong>Feb. 2024: </strong> talk at UChicago / TTIC NLP seminar</li> <li><strong>Jan. 2024: </strong> paper to appear at ICLR 2024 (oral) on <a href="https://arxiv.org/abs/2310.00785">book-length summarization</a></li> <li><strong>Jan. 2024: </strong> papers to appear at EACL 2024 on <a href="https://arxiv.org/abs/2305.14564">using LLMs to reason over long documents</a> and <a href="https://arxiv.org/abs/2302.11521">parameter-efficient LM adaptation</a></li> <li><strong>Dec. 2023: </strong> talk at University of Tokyo</li> <li><strong>Nov. 2023: </strong> our <a href="https://people.cs.umass.edu/~amir/papers/CCS23-LM-stealing.pdf">paper</a> on stealing LLM decoding algorithms won a Distinguished Paper award at CCS 2023!</li> <li><strong>Nov. 2023: </strong> launched <a href="https://litmt.org">litmt.org</a>, a platform for sharing machine-translated world literature</li> <li><strong>Nov. 2023: </strong> talk at MIT Embodied Intelligence seminar</li> <li><strong>Nov. 2023: </strong> talk at UPenn CLunch</li> <li><strong>Oct. 2023: </strong> papers to appear at EMNLP on <a href="https://arxiv.org/abs/2305.14625">evaluating retrieval-augmented generation</a> and <a href="https://arxiv.org/abs/2305.14251">FActScore</a>; at EMNLP-Findings on video game dialogue generation and LLM-based re-ranking; and WMT on <a href="https://arxiv.org/abs/2304.03245">LLMs for literary MT</a> <li><strong>Oct. 2023: </strong> talk at UMD CLIP colloquium</li> <li><strong>Sep. 2023: </strong> one paper to appear at NeurIPS 2023 on <a href="https://arxiv.org/abs/2303.13408">detecting AI-generated text</a></li> <li><strong>Aug. 2023: </strong> spoke about AI and education on two local TV stations (<a href="https://www.westernmassnews.com/2023/08/17/colleges-work-plans-adjust-new-world-ai-chatgpt/">Western Mass News</a>, <a href="https://www.wwlp.com/massappeal/how-will-artificial-intelligence-the-workplace/">WWLP</a>) <li><strong>Jul. 2023: </strong> keynote at <a href="https://reml-workshop.github.io/">Workshop on Retrieval-Enhanced Machine Learning</a> (SIGIR 2023)</li></a> <li><strong>May 2023: </strong> our <a href="https://arxiv.org/abs/2301.13298">paper</a> on evaluating long-form summarization won an <a href="https://2023.eacl.org/program/best-paper/">outstanding paper</a> award at EACL 2023!</li></a> <li><strong>May 2023: </strong> one <a href=https://arxiv.org/abs/2305.18201>paper</a> to appear at ACL 2023 on expert human evaluation of long-form QA</li></a> <li><strong>May 2023: </strong> talk at <a href="https://insights-workshop.github.io/">Workshop on Insights from Negative Results in NLP</a> <li><strong>Apr. 2023: </strong> talk at <a href="https://cds.nyu.edu/text-data-speaker-series/">NYU Text-as-Data series</a> <li><strong>Feb. 
2023: </strong> spoke about ChatGPT on three local TV stations (<a href="https://www.youtube.com/watch?v=rI10pD0JPh0">GBH</a>, <a href="https://www.wwlp.com/massappeal/a-new-form-of-ai-is-disrupting-critical-thinking/">WWLP</a>, and <a href="https://www.youtube.com/watch?v=zQVgHdbwaRE">Western Mass News</a>)</li> <li><strong>Jan. 2023: </strong> papers to appear at EACL 2023 (<a href="https://martiansideofthemoon.github.io/assets/longeval.pdf">better human evaluation of long-form summarization</a>) and EACL-Findings (<a href="https://arxiv.org/abs/2210.07188">crowdsourcing coreference annotations</a>)</li> <li><strong>Dec. 2022: </strong> talk at <a href="https://indoml.in/">IndoML 2022</a></li> <li><strong>Nov. 2022: </strong> honored to receive a <a href="https://www.cics.umass.edu/news/iyyer-named-2022-samsung-ai-researcher-year">Samsung AI Researcher of the Year</a> award!</li> <li><strong>Oct. 2022: </strong> six papers to appear at EMNLP 2022 on <a href="https://arxiv.org/abs/2205.09726">decoding with large ranking models</a>, <a href="https://arxiv.org/abs/2205.12647">zero-shot cross-lingual summarization</a>, <a href="https://mrdrozdov.github.io/knnlm_retrieval_quality.pdf">retrieval-augmented LMs</a>, <a href="https://arxiv.org/abs/2210.14250">document-level literary translation</a>, and datasets for <a href="https://arxiv.org/abs/2210.11689">analyzing Chinese LMs</a> and <a href="https://arxiv.org/abs/2210.13746">MT metrics</a></li> <li><strong>Jul. 2022: </strong> teaching a course on text generation at <a href="https://irdta.eu/deeplearn/2022su/">DeepLearn 2022 Summer School</a></li> <li><strong>May 2022: </strong> talk at Baidu Research</li> <li><strong>May 2022: </strong> preprints on <a href="https://arxiv.org/abs/2205.09726">decoding with large ranking models</a> and <a href="https://arxiv.org/abs/2205.12647">zero-shot cross-lingual summarization</a></li> <li><strong>Apr. 2022: </strong> papers to appear at NAACL 2022 on <a href="https://arxiv.org/abs/2205.09278">long-form QA</a> and a <a href="https://arxiv.org/abs/2204.10878">long-range LM challenge dataset</a></li> <li><strong>Apr. 2022: </strong> talk at UNC Chapel Hill</li> <li><strong>Feb. 2022: </strong>paper to appear at ACL 2022 on <a href="https://arxiv.org/abs/2203.10053">retrieving literary evidence</a></li> <li><strong>Oct. 2021: </strong>talk at Cornell AI seminar</li> <li><strong>Aug. 2021: </strong>six papers to appear at EMNLP 2021 on language generation (<a href="https://arxiv.org/abs/2109.06835">evaluation</a>, <a href="https://arxiv.org/abs/2109.09115">analysis</a>, and <a href="https://arxiv.org/abs/2104.07000">models</a>), <a href="https://arxiv.org/abs/2109.06270">few-shot learning</a>, <a href="https://arxiv.org/abs/2109.06304">phrase embeddings</a>, and <a href="https://arxiv.org/abs/2109.05112">latent tree induction</a></li> <li><strong>Jun. 2021: </strong>co-advising the <a href="http://ciir.cs.umass.edu/alexaprize">UMass Alexa Prize Taskbot</a> team</li> <li><strong>Jun. 2021: </strong>talk at Yandex</li> <li><strong>Jun. 2021: </strong>talk at Cambridge NLP seminar</li> <li><strong>May 2021: </strong>papers to appear at ACL 2021 (<a href="https://arxiv.org/abs/2009.13267">energy-based NMT</a>) and ACL-Findings (modeling clinical notes)</li> <li><strong>Mar. 
2021: </strong>three papers to appear at NAACL 2021 (<a href="https://arxiv.org/abs/2103.06332">long-form QA</a>, <a href="https://arxiv.org/abs/2105.02584">table embeddings</a>, and <a href="https://arxiv.org/abs/2104.03474">simple neural LMs</a>)</li> <li><strong>Mar. 2021: </strong>got an <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=2046248&HistoricalAwards=false">NSF CAREER</a> award to work on interactive storytelling!</li> <li><strong>Feb. 2021: </strong>talk at Georgia Tech NLP seminar</li> <li><strong>Sep. 2020: </strong>four papers to appear at EMNLP 2020 (<a href="https://arxiv.org/abs/2005.00770">task transferability</a>, <a href="https://arxiv.org/abs/2010.05700">stylistic paraphrasing</a>, <a href="https://arxiv.org/abs/2010.01717">interactive story generation</a>, <a href="https://mrdrozdov.github.io/static/papers/sdiora.pdf">unsupervised parsing</a>)</li> <li><strong>Sep. 2020: </strong>talk at UMass CICS <a href="https://www.cics.umass.edu/event/ai-analysis-racism-sports-journalism-college-coaching-q-mohit-iyyer">Computing and Social Justice series</a></li> <li><strong>Sep. 2020: </strong>talk at <a href="https://nlp.cis.upenn.edu/clunch.html">UPenn CLunch</a></li> <li><strong>Jul. 2020: </strong>co-organizing the <a href="https://sites.google.com/view/nuse">Workshop on Narrative Understanding, Storylines, and Events</a> at ACL 2020</li> <li><strong>Jun. 2020: </strong>talk at <a href="https://fwdays.com/en/event/data-science-fwdays-2020">Data Science fwdays'20</a></li> <li><strong>May 2020: </strong>ACL 2020 paper on <a href="https://arxiv.org/abs/2005.00742">"stupid" attention mechanisms</a> now available!</li> <li><strong>Mar. 2020: </strong>talk at USC/ISI Boston office</li> <li><strong>Jan. 2020: </strong> our work on characterizing racial bias in American football was covered in <a href="https://theundefeated.com/features/artificial-intelligence-racial-bias-in-sports/">The Undefeated</a></li> <li><strong>Jan. 2020: </strong>talk at CMU LTI</li> <li><strong>Dec. 2019: </strong>paper on <a href="https://arxiv.org/abs/1910.12366">stealing BERT-based models</a> to appear at ICLR 2020</li> <li><strong>Nov. 2019: </strong>talks at Google NYC, NYU</li> <li><strong>Oct. 2019: </strong>talk at UMass Lowell</li> <li><strong>Sep. 2019: </strong>talk at IBM <a href="http://ibm.biz/qasp2019">QA and semantic parsing</a> workshop</li> <li><strong>Aug. 2019: </strong>two papers to appear at EMNLP 2019 (bias in sports commentary, unsupervised parsing)</li> <li><strong>Jun. 2019: </strong>co-organizing the <a href="https://sites.google.com/view/narrativeunderstanding">Workshop on Narrative Understanding</a> at NAACL 2019; please consider attending!</li> <li><strong>May 2019: </strong>three papers to appear at ACL 2019 (<a href="https://arxiv.org/abs/1906.02622">question generation</a>, <a href="https://arxiv.org/abs/1906.02780">fast decoding</a>, and <a href="https://arxiv.org/abs/1906.03656">paragraph embeddings</a>)</li> <li><strong>Feb. 2019: </strong>two papers to appear at NAACL 2019 (<a href="https://arxiv.org/abs/1904.02142">unsupervised parsing</a>, <a href="https://arxiv.org/abs/1904.08386">computational literary criticism</a>)</li> <li><strong>Nov. 2018: </strong>talk at the University of Antwerp</li> <li><strong>Nov. 2018: </strong>talk at <a href="https://web.cs.wpi.edu/cs/about/colloquium.html">WPI CS Colloquium</a></li> <li><strong>Oct. 2018: </strong>talk at <a href="http://vermontcomplexsystems.org/events/science-of-stories/">UVM Symposium on the Science of Stories</a></li> <li><strong>Aug. 
2018: </strong>three papers to appear at EMNLP 2018 (<a href="https://arxiv.org/abs/1808.07036">QuAC</a>, <a href="https://arxiv.org/abs/1804.07781">(un)interpretability</a>, and <a href="#">sentiment reproducibility</a>) <li><strong>Jul. 2018: </strong>talk at <a href="https://sites.google.com/view/tticlanggen-2018">TTIC Language Generation</a> workshop <li><strong>Jun. 2018: </strong><a href="https://arxiv.org/abs/1802.05365">ELMo</a> won best long paper at NAACL 2018!</li> <li><strong>Mar. 2018: </strong>talk at <a href="https://nlg.isi.edu/nl-seminar">USC/ISI NL Seminar</a></li> <li><strong>Feb. 2018: </strong>three papers to appear at NAACL 2018 (<a href="https://arxiv.org/abs/1804.06059">adversarial paraphrasing</a>, <a href="https://arxiv.org/abs/1802.05365">ELMo</a>, and <a href="https://arxiv.org/abs/1804.06026">image colorization</a>)</li> <li><strong>Feb. 2018: </strong>talk at Ursinus College on applications of machine learning to the digital humanities</li> <li><strong>Jan. 2018: </strong>talk at Indian Institute of Science, Bengaluru</li> <li><strong>Oct. 2017: </strong>submit your QA system to our <a href="https://sites.google.com/view/hcqa/">human-computer QA competition</a> at NIPS 2017!</li> <li><strong>Apr. 2017: </strong>COMICS data and code released <a href='https://github.com/miyyer/comics'>here</a>!</li> <li><strong>Jan. 2017: </strong>talk at CU Boulder Stats, Optimization, and Machine Learning seminar</li> <li><strong>Nov. 2016: </strong>talk at UMass Machine Learning &amp; Friends Lunch</li> <li><strong>Nov. 2016: </strong>new <a href="https://arxiv.org/abs/1611.05118">paper</a> on understanding comic book narratives and characters.</li> <li><strong>Nov. 2016: </strong>new <a href="https://arxiv.org/abs/1611.01242">paper</a> and associated <a href="https://www.microsoft.com/en-us/download/details.aspx?id=54253">dataset</a> for sequential semantic parsing.</li> <li><strong>Jun. 2016: </strong>our <a href="pubs/2016_naacl_relationships.pdf">paper</a> on characterizing fictional relationships won <a href="http://naacl.org/naacl-hlt-2016/best_papers.html">best long paper</a> at NAACL 2016!</li> <li><strong>Apr. 2016: </strong>we are organizing a <a href="https://sites.google.com/a/colorado.edu/2016-naacl-ws-human-computer-qa/">workshop</a> at NAACL 2016 on human-computer question answering with great invited speakers and accepted papers!</li> <li><strong>May 2015: </strong>our quiz bowl robot recently faced off against a team of four Jeopardy champions. Watch the <a href="http://www.youtube.com/watch?v=ZVHR8OAHDlI">introduction</a> to learn how it works and then check out the actual <a href="http://www.youtube.com/watch?v=LqsUaprYMOw">match</a>! If you're interested, <a href="http://www.github.com/miyyer/qb">code</a> for the entire system is also available. 
</li> </ul> </div> </div> <br> <div class="posts-wrapper" style="clear:both;margin-bottom:1.0em;"> <span id="gotostudents" style="font-family: 'roboto_bold';font-size:1.4em;text-transform:uppercase;font-weight: 300;">GROUP</span><br> <span style="margin-left:9em"><a href="https://lilakk.github.io/">Yapei Chang</a></span><br> <span style="margin-left:9em"><a href="https://marzenakrp.github.io/">Marzena Karpinska</a><span style="font-size:0.9em"><i> (postdoc)</i></span><br> <span style="margin-left:9em"> <a href="https://mungg.github.io/">Yekyung Kim</a></span><br> <span style="margin-left:9em"><a href="https://chtmp223.github.io/">Chau Pham</a><br> <span style="margin-left:9em"><a href="https://rishanthrajendhran.github.io/">Rishanth Rajendhran</a><br> <span style="margin-left:9em"><a href="https://jenna-russell.github.io/">Jenna Russell</a><br> <span style="margin-left:9em"><a href="https://yixiao-song.github.io">Yixiao Song</a> <span style="font-size:0.9em"><i>(w/ <a href="https://www.umass.edu/linguistics/member/rajesh-bhatt">Rajesh Bhatt</a>)</i></span> </span><br> <span style="margin-left:9em"><a href="http://katherinethai.github.io/">Katherine Thai</a></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~shufanwang/">Shufan Wang</a></span> <br> <br> <span style="margin-left:9em"><b>Former PhD students:</b></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~tuvu/">Tu Vu</a> <span style="font-size:0.9em"><i>(PhD 2023, now Research Scientist @ Google & Asst. Prof @ Virginia Tech)</i></span></span><br> <span style="margin-left:9em"><a href="http://martiansideofthemoon.github.io/">Kalpesh Krishna</a> <span style="font-size:0.9em"><i>(PhD 2023, now Research Scientist @ Google Bard)</i></span></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~simengsun/">Simeng Sun</a> <span style="font-size:0.9em"><i>(PhD 2024, now Research Scientist @ Nvidia)</i></span></span><br> <span style="margin-left:9em"><a href="https://mrdrozdov.github.io/">Andrew Drozdov</a> <span style="font-size:0.9em"><i>(PhD 2024, co-advised w/ <a href="https://people.cs.umass.edu/~mccallum/">Andrew McCallum</a>, now Research Scientist at Databricks)</i></span></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~nsa/">Nader Akoury</a> <span style="font-size:0.9em"><i>(PhD 2024, now postdoc at Cornell)</i></span></span><br> <br><div style="display:inline-block;margin-left:9em;font-style:italic;line-height:1.6em">Also see my group's other awesome <a href="javascript:unhide('alumni');"><i>alumni</i></a>!</div> <div id="alumni" class="hidden"> <span style="margin-left:9em"><a href="http://brown.edu/Research/AI/people/jack.html">Jack Merullo</a> <span style="font-size:0.9em"><i>(UMass undergrad, now PhD student at Brown)</i></span><br> <span style="margin-left:9em"><a href="https://fallcat.github.io/">Weiqiu You</a><span style="font-size:0.9em"><i> (UMass MS, now PhD student at UPenn)</i></span><br> <span style="margin-left:9em"><a href="https://akshitab.github.io/">Akshita Bhagia</a><span style="font-size:0.9em"><i> (UMass MS, now research engineer at AI2)</i></span><br> <span style="margin-left:9em"><a href="#">Dhruvil Gala</a><span style="font-size:0.9em"><i> (UMass UG, now at Microsoft)</i></span><br> <span style="margin-left:9em"><a href="https://www.linkedin.com/in/ahsaasbajaj/">Ahsaas Bajaj</a><span style="font-size:0.9em"><i> (UMass MS, now at Walmart Labs)</i></span><br> <span style="margin-left:9em"><a 
href="#">Sangeetha Balasubramanian</a><span style="font-size:0.9em"><i> (UMass MS, now ML Engineer at Amazon)</i></span><br> </div> <br><div style="display:inline-block;margin-left:9em;font-style:italic;line-height:1.6em">If you're a prospective PhD student, click <a href="javascript:unhide('prospect_faq');">here</a> for more info.</div><br> <div id="prospect_faq" class="hidden"> <div style="margin-left:9em">I plan to take one new PhD student every year. If you're interested in working with me, please <a href="https://www.cics.umass.edu/admissions/prospective-graduates">apply</a> to the UMass CICS PhD program and list me as a potential advisor. Feel free to send me an email after applying; I may not respond, but I'll still probably see it.</div> <br> </div> <div style="display:inline-block;margin-left:9em;font-style:italic;line-height:1.6em">If you're a current undergraduate or MS student at UMass interested in research, click <a href="javascript:unhide('ug_faq');">here</a>.</div><br> <div id="ug_faq" style="margin-left:9em" class="hidden"> I'm happy to work on research projects with current UMass / Five College undergraduates who have taken NLP and/or machine learning. Please send me an email or stop by my office hours if you're interested! <br><br></div> <!-- <div style="display:inline-block;margin-left:9em;font-style:italic;line-height:1.6em" id="postdoc_info"><strong>NEW: </strong>If you're looking for a postdoc, click <a href="javascript:unhide('postdoc_faq');">here</a> for more info.</div><br> <div id="postdoc_faq" style="margin-left:9em" class="hidden"> I'm looking for a postdoc! The position is for 1-2 years and flexible in terms of research agenda. Topics of particular interest include NLP / digital humanities applications to literary text and language generation (see my recent publications below for a better idea of my research directions). Responsibilities include both leading your own research projects as well as working with and/or supervising graduate and undergraduate students at UMass. Ideal candidates should either (1) hold a PhD in a computational field (e.g., computer or information science) with publications in NLP conferences or (2) hold a humanities or social science PhD (e.g. comparative literature, English, linguistics) with extensive experience using and/or designing techniques for computational text analysis. <strong>If interested, send me an email with your CV at miyyer@cs.umass.edu</strong>. 
--> </div> <br> <div class="posts-wrapper" style="clear:both;margin-bottom:1.0em;"> <span id="gototeaching" style="font-family: 'roboto_bold';font-size:1.4em;text-transform:uppercase;font-weight: 300;">TEACHING</span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~miyyer/nlpseminar/index.html">Fall 2024: <i>NLP seminar (CS 692L)</i></a></span><br> <span style="margin-left:9em"><a href="cs685/index.html">Spring 2024: <i>Advanced Natural Language Processing (CS 685)</i></a></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~miyyer/nlpseminar/spring24.html">Spring 2024: <i>NLP seminar (CS 692L)</i></a></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~miyyer/nlpseminar/fall23.html">Fall 2023: <i>NLP seminar (CS 692L)</i></a></span><br> <span style="margin-left:9em"><a href="cs685_s23/index.html">Spring 2023: <i>Advanced Natural Language Processing (CS 685)</i></a></span><br> <span style="margin-left:9em"><a href="cs685_f22/index.html">Fall 2022: <i>Advanced Natural Language Processing (CS 685)</i></a></span><br> <span style="margin-left:9em"><a href="cs685_f21/index.html">Fall 2021: <i>Advanced Natural Language Processing (CS 685)</i></a></span><br> <span style="margin-left:9em"><a href="cs685_f20/index.html">Fall 2020: <i>Advanced Natural Language Processing (CS 685)</i></a></span><br> <span style="margin-left:9em"><a href="cs585/index.html">Fall 2019: <i>Introduction to Natural Language Processing (CS 585)</i></a></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~brenocon/cs690d_s19/">Spring 2019: <i>Deep Learning for Natural Language Processing (CS 690D)</i></a></span><br> <span style="margin-left:9em"><a href="https://people.cs.umass.edu/~miyyer/cs585_2018/">Fall 2018: <i>Introduction to Natural Language Processing (CS 585)</i></a></span><br> </div> <div class="posts-wrapper" style="clear:both"> <h3 id="gotopubs" style="margin-top:1.5em;margin-bottom:0.5em;">preprints</h3> <ul class="pubs"> <li> <a href="https://arxiv.org/abs/2406.17761">CaLMQA: Exploring culturally specific long-form question answering across 23 languages</a><br> Shane Arora*, Marzena Karpinska*, Hung-Ting Chen, Ipsita Bhattacharjee, <b>Mohit Iyyer</b>, and Eunsol Choi.<br> <i>arXiv 2024</i><br> <h2><a href="https://github.com/2015aroras/CaLMQA">code + data</a> // <a href="javascript:unhide('calmqa24');" class="bibtex">bibtex</a></h2> <div id="calmqa24" class="hidden"> <pre> @inproceedings{calmqa24, author={Shane Arora and Marzena Karpinska and Hung-Ting Chen and Ipsita Bhattacharjee and Mohit Iyyer and Eunsol Choi}, booktitle = {arXiv}, Year = "2024", Title={CaLMQA: Exploring culturally specific long-form question answering across 23 languages}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2407.11930">Fine-grained Hallucination Detection and Mitigation in Long-form Question Answering</a><br> Rachneet Sachdeva, Yixiao Song, <b>Mohit Iyyer</b>, and Iryna Gurevych.<br> <i>arXiv 2024</i><br> <h2><a href="javascript:unhide('haluquest24');" class="bibtex">bibtex</a></h2> <div id="haluquest24" class="hidden"> <pre> @inproceedings{haluquest24, author={Rachneet Sachdeva and Yixiao Song and Mohit Iyyer and Iryna Gurevych}, booktitle = {arXiv}, Year = "2024", Title={Fine-grained Hallucination Detection and Mitigation in Long-form Question Answering}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2406.19928">Interactive Topic Models with Optimal Transport</a><br> Garima Dhanania, Sheshera 
Mysore, Chau Minh Pham, <b>Mohit Iyyer</b>, Hamed Zamani, Andrew McCallum.<br> <i>arXiv 2024</i><br> <h2><a href="javascript:unhide('transport24');" class="bibtex">bibtex</a></h2> <div id="transport24" class="hidden"> <pre> @inproceedings{transport24, author={Garima Dhanania and Sheshera Mysore and Chau Minh Pham and Mohit Iyyer and Hamed Zamani and Andrew McCallum.}, booktitle = {arXiv}, Year = "2024", Title={Interactive Topic Models with Optimal Transport }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2309.09055">Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF </a><br> Simeng Sun, Dhawal Gupta, and <b>Mohit Iyyer</b>.<br> <i>arXiv 2023</i><br> <h2><a href="https://github.com/SimengSun/alpaca_farm_lora">code</a> // <a href="javascript:unhide('rlhf23');" class="bibtex">bibtex</a></h2> <div id="rlhf23" class="hidden"> <pre> @inproceedings{rlhf23, author={Simeng Sun and Dhawal Gupta and Mohit Iyyer}, booktitle = {arXiv}, Year = "2023", Title={Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF }, } </pre> </div> </li> </ul> </div> <div class="posts-wrapper" style="clear:both"> <h3 id="gotopubs" style="margin-top:1.5em;margin-bottom:0.5em;">publications</h3> <ul class="pubs"> <li> <a href="https://arxiv.org/abs/2406.16264">One Thousand and One Pairs: A "novel" challenge for long-context language models </a><br> Marzena Karpinska, Katherine Thai, Kyle Lo, Tanya Goyal, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2024</i><br> <h2><a href="https://novelchallenge.github.io/">leaderboard</a> // <a href="https://github.com/marzenakrp/nocha/">code + sample data</a> // <a href="javascript:unhide('nocha24');" class="bibtex">bibtex</a></h2> <div id="nocha24" class="hidden"> <pre> @inproceedings{nocha24, author={Marzena Karpinska and Katherine Thai and Kyle Lo and Tanya Goyal and Mohit Iyyer}, booktitle = {EMNLP}, Year = "2024", Title={One Thousand and One Pairs: A "novel" challenge for long-context language models }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2406.14517">PostMark: A Robust Blackbox Watermark for Large Language Models </a><br> Yapei Chang, Kalpesh Krishna, Amir Houmansadr, John Wieting, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2024</i><br> <h2><a href="https://github.com/lilakk/PostMark">code + data</a> // <a href="javascript:unhide('postmark24');" class="bibtex">bibtex</a></h2> <div id="postmark24" class="hidden"> <pre> @inproceedings{postmark24, author={Yapei Chang and Kalpesh Krishna and Amir Houmansadr and John Wieting and Mohit Iyyer}, booktitle = {EMNLP}, Year = "2024", Title={PostMark: A Robust Blackbox Watermark for Large Language Models }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2406.19276">VeriScore: Evaluating the factuality of verifiable claims in long-form text generation </a><br> Yixiao Song, Yekyung Kim, and <b>Mohit Iyyer</b>.<br> <i>Findings of EMNLP 2024</i><br> <h2><a href="https://github.com/Yixiao-Song/VeriScore">code + data</a> // <a href="javascript:unhide('veriscore24');" class="bibtex">bibtex</a></h2> <div id="veriscore24" class="hidden"> <pre> @inproceedings{veriscore24, author={Yixiao Song and Yekyung Kim and Mohit Iyyer}, booktitle = {Findings of EMNLP}, Year = "2024", Title={VeriScore: Evaluating the factuality of verifiable claims in long-form text generation }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2406.19371">Suri: Multi-constraint Instruction Following for Long-form Text Generation </a><br> Chau 
Minh Pham, Simeng Sun, and <b>Mohit Iyyer</b>.<br> <i>Findings of EMNLP 2024</i><br> <h2><a href="https://github.com/chtmp223/suri">code + data</a> // <a href="javascript:unhide('suri24');" class="bibtex">bibtex</a></h2> <div id="suri24" class="hidden"> <pre> @inproceedings{suri24, author={Chau Minh Pham and Simeng Sun and Mohit Iyyer}, booktitle = {Findings of EMNLP}, Year = "2024", Title={Suri: Multi-constraint Instruction Following for Long-form Text Generation }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2404.01261">FABLES: Evaluating faithfulness and content selection in book-length summarization </a><br> Yekyung Kim, Yapei Chang, Marzena Karpinska, Aparna Garimella, Varun Manjunatha, Kyle Lo, Tanya Goyal, and <b>Mohit Iyyer</b>.<br> <i>COLM 2024</i><br> <h2><a href="https://github.com/mungg/FABLES">code + data</a> // <a href="javascript:unhide('fables24');" class="bibtex">bibtex</a></h2> <div id="fables24" class="hidden"> <pre> @inproceedings{fables24, author={Yekyung Kim and Yapei Chang and Marzena Karpinska and Aparna Garimella and Varun Manjunatha and Kyle Lo and Tanya Goyal and Mohit Iyyer}, booktitle = {Conference on Language Modeling}, Year = "2024", Title={FABLES: Evaluating faithfulness and content selection in book-length summarization }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2404.13784">Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images </a><br> Ali Naseh, Katherine Thai, <b>Mohit Iyyer</b>, and Amir Houmansadr.<br> <i>COLM 2024</i><br> <h2><a href="javascript:unhide('imprompt24');" class="bibtex">bibtex</a></h2> <div id="imprompt24" class="hidden"> <pre> @inproceedings{imprompt24, author={Ali Naseh and Katherine Thai and Mohit Iyyer and Amir Houmansadr}, booktitle = {Conference on Language Modeling}, Year = "2024", Title={Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2311.08640">Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation</a><br> Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>ACL 2024</i><br> <h2><a href="javascript:unhide('distill24');" class="bibtex">bibtex</a></h2> <div id="distill24" class="hidden"> <pre> @inproceedings{distill24, author={Jiachen Zhao and Wenlong Zhao and Andrew Drozdov and Benjamin Rozonoyer and Md Arafat Sultan and Jay-Yoon Lee and Mohit Iyyer and Andrew McCallum}, booktitle = {ACL}, Year = 2024, Title={Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2310.03214">FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation </a><br> Tu Vu, <b>Mohit Iyyer</b>, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong.<br> <i>Findings of ACL 2024</i><br> <h2><a href="https://github.com/freshllms/freshqa">code + data</a> // <a href="javascript:unhide('freshqa23');" class="bibtex">bibtex</a></h2> <div id="freshqa23" class="hidden"> <pre> @inproceedings{freshqa23, author={Tu Vu and Mohit Iyyer and Xuezhi Wang and Noah Constant and Jerry Wei and Jason Wei and Chris Tar and Yun-Hsuan Sung and Denny Zhou and Quoc Le and Thang Luong}, booktitle = {Findings of ACL}, Year = "2023", Title={FreshLLMs: Refreshing Large Language Models with 
Search Engine Augmentation }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2311.01449">TopicGPT: A Prompt-based Topic Modeling Framework</a><br> Chau Minh Pham, Alexander Hoyle, Simeng Sun, Philip Resnik, and <b>Mohit Iyyer</b>.<br> <i>NAACL 2024</i><br> <h2><a href="https://github.com/chtmp223/topicGPT">code</a> // <a href="javascript:unhide('topicgpt23');" class="bibtex">bibtex</a></h2> <div id="topicgpt23" class="hidden"> <pre> @inproceedings{topicgpt24, author={Chau Minh Pham and Alexander Hoyle and Simeng Sun and Philip Resnik and Mohit Iyyer}, booktitle = NAACL, Year = 2024, Title={TopicGPT: A Prompt-based Topic Modeling Framework }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2311.09517">GEE! Grammar Error Explanation with Large Language Models </a><br> Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Kevin Gimpel, and <b>Mohit Iyyer</b>.<br> <i>Findings of NAACL 2024</i><br> <h2><a href="javascript:unhide('gee24');" class="bibtex">bibtex</a></h2> <div id="gee24" class="hidden"> <pre> @inproceedings{gee24, author={Yixiao Song and Kalpesh Krishna and Rajesh Bhatt and Kevin Gimpel, and Mohit Iyyer}, booktitle = {Findings of NAACL}, Year = 2024, Title={GEE! Grammar Error Explanation with Large Language Models }, } </pre> </div> </li> <li> <a href="https://dl.acm.org/doi/10.1145/3589334.3648142">Triage of Messages and Conversations in a Large-Scale Child Victimization Corpus </a><br> Prasanna Lakkur Subramanyam, <b>Mohit Iyyer</b>, and Brian Levine<br> <i>ACM The Web Conference 2024 (Web4Good Track)</i><br> <h2><a href="javascript:unhide('www24');" class="bibtex">bibtex</a></h2> <div id="www24" class="hidden"> <pre> @inproceedings{www24, author={Prasanna Lakkur Subramanyam and Mohit Iyyer and Brian Levine}, booktitle = {Proceedings of ACM The Web Conference (Web4Good Track)}, Year = 2024, Title={Triage of Messages and Conversations in a Large-Scale Child Victimization Corpus }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2310.00785">BooookScore: A systematic exploration of book-length summarization in the era of LLMs</a><br> Yapei Chang, Kyle Lo, Tanya Goyal, and <b>Mohit Iyyer</b>.<br> <i>ICLR 2024 (oral)</i><br> <h2><a href="https://github.com/lilakk/BooookScore">code</a> // <a href="javascript:unhide('booook23');" class="bibtex">bibtex</a></h2> <div id="booook23" class="hidden"> <pre> @inproceedings{booook23, author={Yapei Chang and Kyle Lo and Tanya Goyal and Mohit Iyyer}, booktitle = ICLR, Year = "2024", Title={BooookScore: A systematic exploration of book-length summarization in the era of LLMs }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2305.14564">PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents </a><br> Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, and <b>Mohit Iyyer</b>.<br> <i>EACL 2024</i><br> <h2><a href="https://github.com/SimengSun/pearl">code</a> // <a href="javascript:unhide('pearl23');" class="bibtex">bibtex</a></h2> <div id="pearl23" class="hidden"> <pre> @inproceedings{pearl23, author={Simeng Sun and Yang Liu and Shuohang Wang and Chenguang Zhu and Mohit Iyyer}, booktitle = EACL, Year = "2024", Title={PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2302.11521">How Does In-Context Learning Help Prompt Tuning? 
</a><br> Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, and <b>Mohit Iyyer</b>.<br> <i>Findings of EACL 2024</i><br> <h2><a href="javascript:unhide('ipt23');" class="bibtex">bibtex</a></h2> <div id="ipt23" class="hidden"> <pre> @inproceedings{ipt23, author={Simeng Sun and Yang Liu and Dan Iter and Chenguang Zhu and Mohit Iyyer}, booktitle = EACL, Year = "2024", Title={How Does In-Context Learning Help Prompt Tuning?}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2305.14251">FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation </a><br> Sewon Min*, Kalpesh Krishna*, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, <b>Mohit Iyyer</b>, Luke Zettlemoyer, and Hannaneh Hajishirzi.<br> <i>EMNLP 2023</i><br> <h2><a href="https://github.com/shmsw25/FActScore">code + data</a> // <a href="javascript:unhide('factscore23');" class="bibtex">bibtex</a></h2> <div id="factscore23" class="hidden"> <pre> @inproceedings{factscore23, author={Sewon Min and Kalpesh Krishna and Xinxi Lyu and Mike Lewis and Wen-tau Yih and Pang Wei Koh and Mohit Iyyer and Luke Zettlemoyer and Hannaneh Hajishirzi}, booktitle = {EMNLP}, Year = "2023", Title={FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2305.14625">kNN-LM Does Not Improve Open-ended Text Generation </a><br> Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun Manjunatha, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2023</i><br> <h2><a href="javascript:unhide('knnlm23');" class="bibtex">bibtex</a></h2> <div id="knnlm23" class="hidden"> <pre> @inproceedings{knnlm23, author={Shufan Wang and Yixiao Song and Andrew Drozdov and Aparna Garimella and Varun Manjunatha and Mohit Iyyer}, booktitle = {EMNLP}, Year = "2023", Title={KNN-LM Does Not Improve Open-ended Text Generation }, } </pre> </div> </li> <li> <a href="#">Disco Elysium: Exploring Player Perceptions of LLM-Generated Dialogue within a Commercial Video Game</a><br> Nader Akoury and Qian Yang and <b>Mohit Iyyer</b>.<br> <i>Findings of EMNLP 2023</i><br> <h2><a href="javascript:unhide('disco23');" class="bibtex">bibtex</a></h2> <div id="disco23" class="hidden"> <pre> @inproceedings{disco23, author={Nader Akoury and Qian Yang and Mohit Iyyer}, booktitle = {Findings of EMNLP}, Year = "2023", Title={Disco Elysium: Exploring Player Perceptions of LLM-Generated Dialogue within a Commercial Video Game }, } </pre> </div> </li> <li> <a href="#">PaRaDe: Passage Ranking using Demonstrations with LLMs</a><br> Andrew Drozdov, Honglei Zhuang, Zhuyun Dai, Zhen Qin, Razieh Rahimi, Xuanhui Wang, Dana Alon, <b>Mohit Iyyer</b>, Andrew McCallum, Donald Metzler, and Kai Hui.<br> <i>Findings of EMNLP 2023 (short)</i><br> <h2><a href="javascript:unhide('parade23');" class="bibtex">bibtex</a></h2> <div id="parade23" class="hidden"> <pre> @inproceedings{parade23, author={Andrew Drozdov and Honglei Zhuang and Zhuyun Dai and Zhen Qin and Razieh Rahimi and Xuanhui Wang and Dana Alon and Mohit Iyyer and Andrew McCallum and Donald Metzler and Kai Hui}, booktitle = {Findings of EMNLP}, Year = "2023", Title={PaRaDe: Passage Ranking using Demonstrations with LLMs }, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2304.03245">Large language models effectively leverage document-level context for literary translation, but critical errors persist </a><br> Marzena Karpinska and <b>Mohit Iyyer</b>.<br> <i>WMT 2023</i><br> <h2><a 
href="https://github.com/marzenakrp/LiteraryTranslation">data</a> // <a href="javascript:unhide('litmt23');" class="bibtex">bibtex</a></h2> <div id="litmt23" class="hidden"> <pre> @inproceedings{litmt23, author={Marzena Karpinska and Mohit Iyyer}, booktitle = {WMT}, Year = "2023", Title={Large language models effectively leverage document-level context for literary translation, but critical errors persist}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2303.13408">Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense </a><br> Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, and <b>Mohit Iyyer</b>.<br> <i>NeurIPS 2023</i><br> <h2><a href="https://github.com/martiansideofthemoon/ai-detection-paraphrases">code + model + data</a> // <a href="javascript:unhide('detect23');" class="bibtex">bibtex</a></h2> <div id="detect23" class="hidden"> <pre> @inproceedings{detect23, author={Kalpesh Krishna and Yixiao Song and Marzena Karpinska and John Wieting and Mohit Iyyer}, booktitle = {Conference on Neural Information Processing Systems}, Year = "2023", Title={Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2305.18201">A Critical Evaluation of Evaluations for Long-form Question Answering</a><br> Fangyuan Xu*, Yixiao Song*, <b>Mohit Iyyer</b>, and Eunsol Choi.<br> <i>ACL 2023</i><br> <h2><a href="javascript:unhide('lfqa23');" class="bibtex">bibtex</a></h2> <div id="lfqa23" class="hidden"> <pre> @inproceedings{lfqa23, author={Fangyuan Xu and Yixiao Song and Mohit Iyyer and Eunsol Choi}, Booktitle = {Association for Computational Linguistics}, Year = "2023", Title={A Critical Evaluation of Evaluations for Long-form Question Answering}, } </pre> </div> </li> <li> <a href="https://people.cs.umass.edu/~amir/papers/CCS23-LM-stealing.pdf">Stealing the Decoding Algorithms of Language Models </a><br> Ali Naseh, Kalpesh Krishna, <b>Mohit Iyyer</b>, and Amir Houmansadr.<br> <i>CCS 2023 (distinguished paper)</i><br> <h2><a href="javascript:unhide('stealing23');" class="bibtex">bibtex</a></h2> <div id="stealing23" class="hidden"> <pre> @inproceedings{stealing23, author={Ali Naseh and Kalpesh Krishna and Mohit Iyyer and Amir Houmansadr}, booktitle = {ACM Computer and Communications Security Conference}, Year = "2023", Title={Stealing the Decoding Algorithms of Language Models}, } </pre> </div> </li> <li> <a href="https://people.cs.umass.edu/~nsa/papers/discoelysium_aaai_2023.pdf">Towards Grounded Dialogue Generation in Video Game Environments </a><br> Nader Akoury, Ronan Salz, and <b>Mohit Iyyer</b>.<br> <i>Workshop on Creative AI Across Modalities @ AAAI 2023</i><br> <h2><a href="javascript:unhide('grounded23');" class="bibtex">bibtex</a></h2> <div id="grounded23" class="hidden"> <pre> @inproceedings{grounded23, author={Nader Akoury and Ronan Salz and Mohit Iyyer}, booktitle = {Workshop on Creative AI Across Modalities, AAAI 2023}, Year = "2023", Title={Towards Grounded Dialogue Generation in Video Game Environments}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2301.13298">LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization </a><br> Kalpesh Krishna, Erin Bransom, Bailey Kuehl, <b>Mohit Iyyer</b>, Pradeep Dasigi, Arman Cohan, and Kyle Lo.<br> <i>EACL 2023 (<a href="https://2023.eacl.org/program/best-paper/">outstanding paper</a>)</i><br> <h2><a href="https://github.com/martiansideofthemoon/longeval-summarization">code + 
data</a> // <a href="javascript:unhide('longeval23');" class="bibtex">bibtex</a></h2> <div id="longeval23" class="hidden"> <pre> @inproceedings{longeval23, author={Kalpesh Krishna and Erin Bransom and Bailey Kuehl and Mohit Iyyer and Pradeep Dasigi and Arman Cohan and Kyle Lo}, booktitle = {European Chapter of the Association for Computational Linguistics}, Year = "2023", Title={LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2210.07188">ezCoref: Towards Unifying Annotation Guidelines for Coreference Resolution </a><br> Ankita Gupta, Marzena Karpinska, Wenlong Zhao, Kalpesh Krishna, Jack Merullo, Luke Yeh, <b>Mohit Iyyer</b>, and Brendan O'Connor.<br> <i>Findings of EACL 2023</i><br> <h2><a href="javascript:unhide('coref23');" class="bibtex">bibtex</a></h2> <div id="coref23" class="hidden"> <pre> @inproceedings{coref23, author={Ankita Gupta and Marzena Karpinska and Wenlong Zhao and Kalpesh Krishna and Jack Merullo and Luke Yeh and Mohit Iyyer and Brendan O'Connor}, booktitle = {Findings of European Chapter of the Association for Computational Linguistics}, Year = "2023", Title={ezCoref: Towards Unifying Annotation Guidelines for Coreference Resolution}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2205.09726">RankGen: Improving Text Generation with Large Ranking Models</a><br> Kalpesh Krishna, Yapei Chang, John Wieting, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2022</i><br> <h2><a href="https://github.com/martiansideofthemoon/rankgen">code</a> // <a href="javascript:unhide('rankgen22');" class="bibtex">bibtex</a></h2> <div id="rankgen22" class="hidden"> <pre> @inproceedings{rankgen22, author={Kalpesh Krishna and Yapei Chang and John Wieting and Mohit Iyyer}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={RankGen: Improving Text Generation with Large Ranking Models}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2205.12647">Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation</a><br> Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, <b>Mohit Iyyer</b>, and Noah Constant.<br> <i>EMNLP 2022</i><br> <h2><a href="https://github.com/google-research/prompt-tuning/tree/main/prompt_tuning/x_gen">code</a> // <a href="javascript:unhide('multilingual22');" class="bibtex">bibtex</a></h2> <div id="multilingual22" class="hidden"> <pre> @inproceedings{multilingual22, author={Tu Vu and Aditya Barua and Brian Lester and Daniel Cer and Mohit Iyyer and Noah Constant}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2210.14250">Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature</a><br> Katherine Thai*, Marzena Karpinska*, Kalpesh Krishna, William Ray, Moira Inghilleri, John Wieting, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2022</i><br> <h2><a href="https://github.com/katherinethai/par3/">data + code</a> // <a href="javascript:unhide('par22');" class="bibtex">bibtex</a></h2> <div id="par22" class="hidden"> <pre> @inproceedings{par22, author={Katherine Thai and Marzena Karpinska and Kalpesh Krishna and William Ray and Moira Inghilleri and John Wieting and Mohit Iyyer}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={Exploring Document-Level Literary Machine Translation with Parallel 
Paragraphs from World Literature}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2210.11689">SLING: Sino Linguistic Evaluation of Large Language Models</a><br> Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2022</i><br> <h2><a href="https://github.com/Yixiao-Song/SLING_Data_Code">data + code</a> // <a href="javascript:unhide('sling22');" class="bibtex">bibtex</a></h2> <div id="sling22" class="hidden"> <pre> @inproceedings{sling22, author={Yixiao Song and Kalpesh Krishna and Rajesh Bhatt and Mohit Iyyer}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={SLING: Sino Linguistic Evaluation of Large Language Models}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2210.13746">DEMETR: Diagnosing Evaluation Metrics for Translation</a><br> Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2022</i><br> <h2><a href="https://github.com/marzenakrp/demetr">data</a> // <a href="javascript:unhide('demetr22');" class="bibtex">bibtex</a></h2> <div id="demetr22" class="hidden"> <pre> @inproceedings{demetr22, author={Marzena Karpinska and Nishant Raj and Katherine Thai and Yixiao Song and Ankita Gupta and Mohit Iyyer}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={DEMETR: Diagnosing Evaluation Metrics for Translation}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2210.15859">You can't pick your neighbors, or can you? When and How to Rely on Retrieval in the KNN-LM</a><br> Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, and <b>Mohit Iyyer</b>.<br> <i>Findings of EMNLP 2022</i><br> <h2><a href="javascript:unhide('knnlm22');" class="bibtex">bibtex</a></h2> <div id="knnlm22" class="hidden"> <pre> @inproceedings{knnlm22, author={Andrew Drozdov and Shufan Wang and Razieh Rahimi and Andrew McCallum and Hamed Zamani and Mohit Iyyer}, booktitle = {Empirical Methods in Natural Language Processing}, Year = "2022", Title={You can't pick your neighbors, or can you? 
When and How to Rely on Retrieval in the KNN-LM}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2205.09278">Modeling Exemplification in Long-form Question Answering via Retrieval</a><br> Shufan Wang, Fangyuan Xu, Laure Thompson, Eunsol Choi, and <b>Mohit Iyyer</b>.<br> <i>NAACL 2022</i><br> <h2><a href="https://github.com/north125ptlm/lfqa-retrieval">code</a> // <a href="javascript:unhide('lfqa22');" class="bibtex">bibtex</a></h2> <div id="lfqa22" class="hidden"> <pre> @inproceedings{lfqa22, author={Shufan Wang and Fangyuan Xu and Laure Thompson and Eunsol Choi and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2022", Title={Modeling Exemplification in Long-form Question Answering via Retrieval}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2204.10878">ChapterBreak: A Challenge Dataset for Long-Range Language Models</a><br> Simeng Sun, Katherine Thai, and <b>Mohit Iyyer</b>.<br> <i>NAACL 2022 (short)</i><br> <h2><a href="https://github.com/SimengSun/ChapterBreak">data + code</a> // <a href="javascript:unhide('chbrk22');" class="bibtex">bibtex</a></h2> <div id="chbrk22" class="hidden"> <pre> @inproceedings{chbrk22, author={Simeng Sun and Katherine Thai and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2022", Title={ChapterBreak: A Challenge Dataset for Long-Range Language Models}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2203.10053">RELiC: Retrieving Evidence for Literary Claims</a><br> Katherine Thai, Yapei Chang, Kalpesh Krishna, and <b>Mohit Iyyer</b>.<br> <i>ACL 2022</i><br> <h2><a href="https://relic.cs.umass.edu/">project page (data + code + leaderboard)</a> // <a href="javascript:unhide('relic22');" class="bibtex">bibtex</a></h2> <div id="relic22" class="hidden"> <pre> @inproceedings{relic22, author={Katherine Thai and Yapei Chang and Kalpesh Krishna and Mohit Iyyer}, Booktitle = {Association of Computational Linguistics}, Year = "2022", Title={RELiC: Retrieving Evidence for Literary Claims}, } </pre> </div> </li> <li> <a href="https://people.cs.umass.edu/~simengsun/paper/insights_negative_results_2022.pdf">How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?</a><br> Simeng Sun, Brian Dillon, and <b>Mohit Iyyer</b>.<br> <i>Workshop on Insights from Negative Results in NLP @ ACL 2022</i><br> <h2><a href="javascript:unhide('neg22');" class="bibtex">bibtex</a></h2> <div id="neg22" class="hidden"> <pre> @inproceedings{neg22, author={Simeng Sun and Brian Dillon and Mohit Iyyer}, Booktitle = {Workshop on Insights from Negative Results in NLP @ ACL 2022}, Year = "2022", Title={How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2109.09115">Do Long-Range Language Models Actually Use Long-Range Context?</a><br> Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2021</i><br> <h2><a href="javascript:unhide('long21');" class="bibtex">bibtex</a></h2> <div id="long21" class="hidden"> <pre> @inproceedings{long21, author={Simeng Sun and Kalpesh Krishna and Andrew Mattarella-Micke and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={Do Long-Range Language Models Actually Use Long-Range Context?}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2109.06270">STraTA: Self-Training with Task 
Augmentation for Better Few-shot Learning.</a><br> Tu Vu, Minh-Thang Luong, Quoc Le, Grady Simon, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2021</i><br> <h2><a href="javascript:unhide('strata21');" class="bibtex">bibtex</a></h2> <div id="strata21" class="hidden"> <pre> @inproceedings{strata21, author={Tu Vu and Minh-Thang Luong and Quoc Le and Grady Simon and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={STraTA: Self-Training with Task Augmentation for Better Few-shot Learning.}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2109.06835">The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation.</a><br> Marzena Karpinska, Nader Akoury, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2021</i><br> <h2><a href="javascript:unhide('perils21');" class="bibtex">bibtex</a></h2> <div id="perils21" class="hidden"> <pre> @inproceedings{perils21, author={Marzena Karpinska and Nader Akoury and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation.}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2109.06304">Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration.</a><br> Shufan Wang, Laure Thompson, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2021</i><br> <h2><a href="https://github.com/sf-wa-326/phrase-bert-topic-model">code</a> // <a href="javascript:unhide('pb21');" class="bibtex">bibtex</a></h2> <div id="pb21" class="hidden"> <pre> @inproceedings{pb21, author={Shufan Wang and Laure Thompson and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration.}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2109.05112">Improved Latent Tree Induction with Distant Supervision via Span Constraints.</a><br> Zhiyang Xu, Andrew Drozdov, Jay Yoon Lee, Tim O'Gorman, Subendhu Rongali, Dylan Finkbeiner, Shilpa Suresh, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>EMNLP 2021</i><br> <h2><a href="https://github.com/iesl/distantly-supervised-diora">code</a> // <a href="javascript:unhide('diora21');" class="bibtex">bibtex</a></h2> <div id="diora21" class="hidden"> <pre> @inproceedings{diora21, author={Zhiyang Xu and Andrew Drozdov and Jay Yoon Lee and Tim O'Gorman and Subendhu Rongali and Dylan Finkbeiner and Shilpa Suresh and Mohit Iyyer and Andrew McCallum}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={Improved Latent Tree Induction with Distant Supervision via Span Constraints.}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2104.07000">IGA: An Intent-Guided Authoring Assistant.</a><br> Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2021</i><br> <h2><a href="javascript:unhide('iga21');" class="bibtex">bibtex</a></h2> <div id="iga21" class="hidden"> <pre> @inproceedings{iga21, author={Simeng Sun and Wenlong Zhao and Varun Manjunatha and Rajiv Jain and Vlad Morariu and Franck Dernoncourt and Balaji Vasan Srinivasan and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2021", Title={IGA: An Intent-Guided Authoring Assistant.}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2009.13267">Energy-Based Reranking: Improving Neural Machine Translation Using 
Energy-Based Models.</a><br> Sumanta Bhattacharyya, Pedram Rooshenas, Subhajit Naskar, Simeng Sun, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>ACL 2021</i><br> <h2><a href="javascript:unhide('enmt21');" class="bibtex">bibtex</a></h2> <div id="enmt21" class="hidden"> <pre> @inproceedings{enmt21, author={Sumanta Bhattacharyya and Pedram Rooshenas and Subhajit Naskar and Simeng Sun and Mohit Iyyer and Andrew McCallum}, Booktitle = {Association for Computational Linguistics}, Year = "2021", Title={Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models.}, } </pre> </div> </li> <li> <a href="https://aclanthology.org/2021.findings-acl.352.pdf">Predicting In-Hospital Mortality by Combining Clinical Notes with Time-Series Data.</a><br> Iman Deznabi, <b>Mohit Iyyer</b>, and Madalina Fiterau.<br> <i>Findings of ACL 2021 (short)</i><br> <h2><a href="javascript:unhide('clinical21');" class="bibtex">bibtex</a></h2> <div id="clinical21" class="hidden"> <pre> @inproceedings{clinical21, author={Iman Deznabi and Mohit Iyyer and Madalina Fiterau}, Booktitle = {Findings of the Association for Computational Linguistics}, Year = "2021", Title={Predicting in-hospital mortality by combining clinical notes with time-series data}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2104.09835">WiFiMod: Transformer-based Indoor Human Mobility Modeling using Passive Sensing.</a><br> Amee Trivedi, Kate Silverstein, Emma Strubell, <b>Mohit Iyyer</b>, and Prashant Shenoy.<br> <i>ACM COMPASS 2021</i><br> <h2><a href="javascript:unhide('wifimod21');" class="bibtex">bibtex</a></h2> <div id="wifimod21" class="hidden"> <pre> @inproceedings{wifimod21, author={Amee Trivedi and Kate Silverstein and Emma Strubell and Mohit Iyyer and Prashant Shenoy}, Booktitle = {ACM COMPASS}, Year = "2021", Title={WiFiMod: Transformer-based Indoor Human Mobility Modeling using Passive Sensing}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2103.06332">Hurdles to Progress in Long-form Question Answering.</a><br> Kalpesh Krishna, Aurko Roy, and <b>Mohit Iyyer</b>.<br> <i>NAACL 2021</i><br> <h2><a href="https://github.com/martiansideofthemoon/hurdles-longform-qa">code</a> // <a href="https://ai.googleblog.com/2021/03/progress-and-challenges-in-long-form.html">blog</a> // <a href="javascript:unhide('lfqa21');" class="bibtex">bibtex</a></h2> <div id="lfqa21" class="hidden"> <pre> @inproceedings{lfqa21, author={Kalpesh Krishna and Aurko Roy and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2021", Title={Hurdles to Progress in Long-form Question Answering}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2105.02584">TABBIE: Pretrained Representations of Tabular Data.</a><br> Hiroshi Iida, June Thai, Varun Manjunatha, and <b>Mohit Iyyer</b>.<br> <i>NAACL 2021</i><br> <h2><a href="https://github.com/SFIG611/tabbie">code</a> // <a href="javascript:unhide('tabbie21');" class="bibtex">bibtex</a></h2> <div id="tabbie21" class="hidden"> <pre> @inproceedings{tabbie21, author={Hiroshi Iida and June Thai and Varun Manjunatha and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2021", Title={TABBIE: Pretrained Representations of Tabular Data}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2104.03474">Revisiting Simple Neural Probabilistic Language Models.</a><br> Simeng Sun and <b>Mohit Iyyer</b>.<br> <i>NAACL 2021 (short)</i><br> <h2><a href="https://github.com/SimengSun/revisit-nplm">code</a> 
// <a href="javascript:unhide('stupidlm21');" class="bibtex">bibtex</a></h2> <div id="stupidlm21" class="hidden"> <pre> @inproceedings{stupidlm21, author={Simeng Sun and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2021", Title={Revisiting Simple Neural Probabilistic Language Models}, } </pre> </div> </li> <li> <a href="http://arxiv.org/abs/2103.15335">Changing the Mind of Transformers for Topically-Controllable Language Generation.</a><br> Haw-Shiuan Chang, Jiaming Yuan, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>EACL 2021</i><br> <h2><a href="https://github.com/iesl/interactive_LM">code</a> // <a href="javascript:unhide('eacl21');" class="bibtex">bibtex</a></h2> <div id="eacl21" class="hidden"> <pre> @inproceedings{eacl21, author={Haw-Shiuan Chang and Jiaming Yuan and Mohit Iyyer and Andrew McCallum}, Booktitle = {European Chapter of the Association for Computational Linguistics}, Year = "2021", Title={Changing the Mind of Transformers for Topically-Controllable Language Generation}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2103.02537">Weakly-Supervised Open-Retrieval Conversational Question Answering.</a><br> Chen Qu, Liu Yang, Cen Chen, W. Bruce Croft, Kalpesh Krishna, and <b>Mohit Iyyer</b>.<br> <i>ECIR 2021</i><br> <h2><a href="javascript:unhide('ecir21');" class="bibtex">bibtex</a></h2> <div id="ecir21" class="hidden"> <pre> @inproceedings{ecir20, author={Chen Qu and Liu Yang and Cen Chen and W. Bruce Croft and Kalpesh Krishna and Mohit Iyyer}, Booktitle = {European Conference on Information Retrieval}, Year = "2021", Title={Weakly-Supervised Open-Retrieval Conversational Question Answering}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2011.00092">Analyzing Gender Bias within Narrative Tropes.</a><br> Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, and <b>Mohit Iyyer</b>.<br> <i>Workshop on NLP and CSS at EMNLP 2020</i><br> <h2><a href="https://github.com/dhruvilgala/tvtropes">data</a> // <a href="javascript:unhide('tropes20');" class="bibtex">bibtex</a></h2> <div id="tropes20" class="hidden"> <pre> @inproceedings{tropes20, author={Dhruvil Gala and Mohammad Omar Khursheed and Hannah Lerner and Brendan O'Connor and Mohit Iyyer}, Booktitle = {Workshop on NLP and CSS at EMNLP}, Year = "2020", Title={Analyzing Gender Bias within Narrative Tropes}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2005.00770">Exploring and Predicting Transferability across NLP Tasks.</a><br> Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2020</i><br> <h2><a href="https://github.com/tuvuumass/task-transferability">code</a> // <a href="javascript:unhide('transfer20');" class="bibtex">bibtex</a></h2> <div id="transfer20" class="hidden"> <pre> @inproceedings{transfer20, author={Tu Vu and Tong Wang and Tsendsuren Munkhdalai and Alessandro Sordoni and Adam Trischler and Andrew Mattarella-Micke and Subhransu Maji and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Exploring and Predicting Transferability across NLP Tasks}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2010.05700">Reformulating Unsupervised Style Transfer as Paraphrase Generation.</a><br> Kalpesh Krishna, John Wieting, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2020</i><br> <h2><a href="http://style.cs.umass.edu">project page (code + data + live demo)</a> // <a 
href="javascript:unhide('style20');" class="bibtex">bibtex</a></h2> <div id="style20" class="hidden"> <pre> @inproceedings{style20, author={Kalpesh Krishna and John Wieting and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2010.01717">STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation.</a><br> Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2020</i><br> <h2><a href="https://storium.cs.umass.edu">project page (data + leaderboard)</a> // <a href="javascript:unhide('storium20');" class="bibtex">bibtex</a></h2> <div id="storium20" class="hidden"> <pre> @inproceedings{storium20, author={Nader Akoury and Shufan Wang and Josh Whiting and Stephen Hood and Nanyun Peng and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation}, } </pre> </div> </li> <li> <a href="https://mrdrozdov.github.io/static/papers/sdiora.pdf">Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders.</a><br> Andrew Drozdov, Subendhu Rongali, Yi-Pei Chen, Tim O'Gorman, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>EMNLP 2020</i><br> <h2><a href="javascript:unhide('sdiora20');" class="bibtex">bibtex</a></h2> <div id="sdiora20" class="hidden"> <pre> @inproceedings{sdiora20, author={Andrew Drozdov and Subendhu Rongali and Yi-Pei Chen and Tim O'Gorman and Mohit Iyyer and Andrew McCallum}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2020", Title={Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders}, } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2005.00742">Hard-Coded Gaussian Attention for Neural Machine Translation.</a><br> Weiqiu You*, Simeng Sun*, and <b>Mohit Iyyer</b>.<br> <i>ACL 2020</i><br> <h2><a href="https://github.com/fallcat/stupidNMT">code</a> // <a href="javascript:unhide('acl2020');" class="bibtex">bibtex</a></h2> <div id="acl2020" class="hidden"> <pre> @inproceedings{acl2020, Author = {Weiqiu You and Simeng Sun and Mohit Iyyer}, Booktitle = {Association for Computational Linguistics}, Year = "2020", Title = {Hard-Coded Gaussian Attention for Neural Machine Translation} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/2005.11364">Open-Retrieval Conversational Question Answering.</a><br> Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, and <b>Mohit Iyyer</b>.<br> <i>SIGIR 2020</i><br> <h2><a href="javascript:unhide('sigir20');" class="bibtex">bibtex</a></h2> <div id="sigir20" class="hidden"> <pre> @inproceedings{openconvqa, Author = {Chen Qu and Liu Yang and Cen Chen and Minghui Qiu and W. 
Bruce Croft and Mohit Iyyer}, Booktitle = {43rd International ACM SIGIR Conference on Research and Development in Information Retrieval}, Year = "2020", Title = {Open-Retrieval Conversational Question Answering} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1804.08077">Which Evaluations Uncover Sense Representations that Actually Make Sense?</a><br> Fenfei Guo, Jordan Boyd-Graber, <b>Mohit Iyyer</b>, and Leah Findlater.<br> <i>LREC 2020</i><br> <h2><a href="javascript:unhide('lrec2020');" class="bibtex">bibtex</a></h2> <div id="lrec2020" class="hidden"> <pre> @inproceedings{lrec2020, Author = {Fenfei Guo and Jordan Boyd-Graber and Mohit Iyyer and Leah Findlater}, Booktitle = {Language Resources and Evaluation Conference}, Year = "2020", Title = {Which Evaluations Uncover Sense Representations that Actually Make Sense?} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1910.12366">Thieves on Sesame Street! Model Extraction of BERT-based APIs.</a><br> Kalpesh Krishna, Gaurav Singh Tomar, Ankur Parikh, Nicolas Papernot, and <b>Mohit Iyyer</b>.<br> <i>ICLR 2020</i><br> <h2><a href="https://github.com/google-research/language/tree/master/language/bert_extraction">code</a> // <a href="javascript:unhide('thieves20');" class="bibtex">bibtex</a></h2> <div id="thieves20" class="hidden"> <pre> @inproceedings{thieves20, Author = {Kalpesh Krishna and Gaurav Singh Tomar and Ankur Parikh and Nicolas Papernot and Mohit Iyyer}, Booktitle = {International Conference on Learning Representations}, Year = "2020", Title = {Thieves on Sesame Street! Model Extraction of BERT-based APIs.} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1909.03343">Investigating Sports Commentator Bias within a Large Corpus of American Football Broadcasts.</a><br> Jack Merullo*, Luke Yeh*, Abram Handler, Alvin Grissom II, Brendan O'Connor, and <b>Mohit Iyyer</b>.<br> <i>EMNLP 2019 (short)</i><br> <h2><a href="https://github.com/jmerullo/football">data + code</a> // <a href="javascript:unhide('football19');" class="bibtex">bibtex</a> // press: <a href="https://theundefeated.com/features/artificial-intelligence-racial-bias-in-sports/">The Undefeated</a></h2> <div id="football19" class="hidden"> <pre> @inproceedings{football2019, Author = {Jack Merullo and Luke Yeh and Abram Handler and Alvin Grissom II and Brendan O'Connor and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2019", Title = {Investigating Sports Commentator Bias within a Large Corpus of American Football Broadcasts.} } </pre> </div> </li> <li> <a href="https://www.aclweb.org/anthology/D19-1161.pdf">Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders.</a><br> Andrew Drozdov, Patrick Verga, Yi-Pei Chen, <b>Mohit Iyyer</b>, and Andrew McCallum.<br> <i>EMNLP 2019 (short)</i><br> <h2><a href="javascript:unhide('diora2_19');" class="bibtex">bibtex</a></h2> <div id="diora2_19" class="hidden"> <pre> @inproceedings{diora2_2019, Author = {Andrew Drozdov and Patrick Verga and Yi-Pei Chen and Mohit Iyyer and Andrew McCallum}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2019", Title = {Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1908.09456">Attentive History Selection for Conversational Question Answering.</a><br> Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. 
Bruce Croft, and <b>Mohit Iyyer</b>.<br> <i>CIKM 2019</i><br> <h2><a href="javascript:unhide('cikm19');" class="bibtex">bibtex</a></h2> <div id="cikm19" class="hidden"> <pre> @inproceedings{cikm2019, Author = {Chen Qu and Liu Yang and Minghui Qiu and Yongfeng Zhang and Cen Chen and W. Bruce Croft and Mohit Iyyer}, Booktitle = {Conference on Information and Knowledge Management}, Year = "2019", Title = {Attentive History Selection for Conversational Question Answering} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1906.02780">Syntactically Supervised Transformers for Faster Neural Machine Translation.</a><br> Nader Akoury, Kalpesh Krishna, and <b>Mohit Iyyer</b>.<br> <i>ACL 2019</i><br> <h2><a href="https://github.com/dojoteef/synst">code + data</a> // <a href="javascript:unhide('synst19');" class="bibtex">bibtex</a></h2> <div id="synst19" class="hidden"> <pre> @inproceedings{synst2019, Author = {Nader Akoury and Kalpesh Krishna and Mohit Iyyer}, Booktitle = {Association for Computational Linguistics}, Year = "2019", Title = {Syntactically Supervised Transformers for Faster Neural Machine Translation} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1906.02622">Generating Question-Answer Hierarchies.</a><br> Kalpesh Krishna and <b>Mohit Iyyer</b>.<br> <i>ACL 2019</i><br> <h2><a href="http://squash.cs.umass.edu">project page (code + data + live demo)</a> // <a href="javascript:unhide('squash19');" class="bibtex">bibtex</a></h2> <div id="squash19" class="hidden"> <pre> @inproceedings{squash2019, Author = {Kalpesh Krishna and Mohit Iyyer}, Booktitle = {Association for Computational Linguistics}, Year = "2019", Title = {Generating Question-Answer Hierarchies} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1906.03656">Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification.</a><br> Tu Vu and <b>Mohit Iyyer</b>.<br> <i>ACL 2019 (short)</i><br> <h2><a href="https://github.com/tuvuumass/SCoPE">code</a> // <a href="javascript:unhide('para19');" class="bibtex">bibtex</a></h2> <div id="para19" class="hidden"> <pre> @inproceedings{paraemb2019, Author = {Tu Vu and Mohit Iyyer}, Booktitle = {Association for Computational Linguistics}, Year = "2019", Title = {Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1904.04792">Quizbowl: The Case for Incremental Question Answering.</a><br> Pedro Rodriguez, Shi Feng, <b>Mohit Iyyer</b>, He He, and Jordan Boyd-Graber.<br> <i>arXiv 2019</i><br> <h2><a href="javascript:unhide('qb19');" class="bibtex">bibtex</a></h2> <div id="qb19" class="hidden"> <pre> @inproceedings{QB2019, Author = {Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan Boyd-Graber}, Booktitle = {arXiv}, Year = "2019", Title = {Quizbowl: The Case for Incremental Question Answering} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1905.05412">BERT with History Modeling for Conversational Question Answering.</a><br> Chen Qu, Liu Yang, Minghui Qiu, W. Bruce Croft, Yongfeng Zhang, and <b>Mohit Iyyer</b>.<br> <i>SIGIR 2019 (short)</i><br> <h2><a href="https://github.com/prdwb/bert_hae">code</a> // <a href="javascript:unhide('sigir19');" class="bibtex">bibtex</a></h2> <div id="sigir19" class="hidden"> <pre> @inproceedings{ConvQA2019, Author = {Chen Qu and Liu Yang and Minghui Qiu and W.
Bruce Croft and Yongfeng Zhang and Mohit Iyyer}, Booktitle = {42nd International ACM SIGIR Conference on Research and Development in Information Retrieval}, Year = "2019", Title = {BERT with History Modeling for Conversational Question Answering} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1904.02142">Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders.</a><br> Andrew Drozdov, Patrick Verga, Mohit Yadav, <b>Mohit Iyyer</b>, Andrew McCallum.<br> <i>NAACL 2019</i><br> <h2><a href="https://github.com/iesl/diora">code</a> // <a href="javascript:unhide('diora19');" class="bibtex">bibtex</a></h2> <div id="diora19" class="hidden"> <pre> @inproceedings{DIORA2019, Author = {Andrew Drozdov and Patrick Verga and Mohit Yadav and Mohit Iyyer and Andrew McCallum}, Booktitle = {North American Association for Computational Linguistics}, Year = "2019", Title = {Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1904.08386">Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism</a><br> Shufan Wang, <b>Mohit Iyyer</b><br> <i>NAACL 2019 (short)</i><br> <h2><a href="javascript:unhide('invisible19');" class="bibtex">bibtex</a></h2> <div id="invisible19" class="hidden"> <pre> @inproceedings{Wang2019, Author = {Shufan Wang and Mohit Iyyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2019", Title = {Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1808.07036">QuAC: Question Answering in Context.</a><br> Eunsol Choi*, He He*, <b>Mohit Iyyer</b>*, Mark Yatskar*, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer.<br> <i>EMNLP 2018</i><br> <h2><a href="http://quac.ai">project page</a> // <a href="http://quac.ai/datasheet.pdf">datasheet</a> // <a href="javascript:unhide('quac18');" class="bibtex">bibtex</a></h2> <div id="quac18" class="hidden"> <pre> @inproceedings{ChoiQuAC2018, Author = {Eunsol Choi and He He and Mohit Iyyer and Mark Yatskar and Wen-tau Yih and Yejin Choi and Percy Liang and Luke Zettlemoyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2018", Title = {QuAC: Question Answering in Context} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1804.07781">Pathologies of Neural Models Make Interpretation Difficult.</a><br> Shi Feng, Eric Wallace, Alvin Grissom II, <b>Mohit Iyyer</b>, Pedro Rodriguez and Jordan Boyd-Graber<br> <i>EMNLP 2018</i><br> <h2><a href="https://vimeo.com/306158589">video</a> // <a href="http://users.umiacs.umd.edu/~shifeng/www/2018_emnlp_pathologies_slides.pdf">slides</a> // <a href="javascript:unhide('rawr18');" class="bibtex">bibtex</a> // press: <a href=https://cmns.umd.edu/news-events/features/4264>UMD</a></h2> <div id="rawr18" class="hidden"> <pre> @inproceedings{FengRAWR2018, Author = {Shi Feng and Eric Wallace and Alvin Grissom II and Mohit Iyyer and Pedro Rodriguez and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2018", Title = {Pathologies of Neural Models Make Interpretation Difficult} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1808.07733">Revisiting the Importance of Encoding Logic Rules in Sentiment Classification.</a><br> Kalpesh Krishna, Preethi Jyothi, <b>Mohit Iyyer</b><br> <i>EMNLP 2018 (short)</i><br> <h2><a 
href="https://github.com/martiansideofthemoon/logic-rules-sentiment">code + data</a> // <a href="https://vimeo.com/306136412">video</a> // <a href="javascript:unhide('revisit18');" class="bibtex">bibtex</a></h2> <div id="revisit18" class="hidden"> <pre> @inproceedings{KrishnaRevisit2018, Author = {Kalpesh Krishna and Preethi Jyothi and Mohit Iyyer}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = "2018", Title = {Revisiting the Importance of Encoding Logic Rules in Sentiment Classification} </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1804.06059">Adversarial Example Generation with Syntactically Controlled Paraphrase Networks.</a><br> <b>Mohit Iyyer</b>*, John Wieting*, Kevin Gimpel, Luke Zettlemoyer.<br> <i>NAACL 2018</i><br> <h2><a href="https://github.com/miyyer/scpn">code + data</a> // <a href="https://vimeo.com/277673796">video</a> // <a href="javascript:unhide('scpn18');" class="bibtex">bibtex</a></h2> <div id="scpn18" class="hidden"> <pre> @inproceedings{IyyerSCPN2018, Author = {Mohit Iyyer and John Wieting and Kevin Gimpel and Luke Zettlemoyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2018", Title = {Adversarial Example Generation with Syntactically Controlled Paraphrase Networks} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1802.05365">Deep Contextualized Word Representations.</a><br> Matthew E. Peters, Mark Neumann, <b>Mohit Iyyer</b>, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer.<br> <i>NAACL 2018 (best long paper)</i><br> <h2><a href="https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md">code</a> // <a href="https://vimeo.com/277673209">video</a> // <a href="javascript:unhide('elmo18');" class="bibtex">bibtex</a></h2> <div id="elmo18" class="hidden"> <pre> @inproceedings{PetersELMo2018, Author = {Matthew E. 
Peters and Mark Neumann and Mohit Iyyer and Matt Gardner and Christopher Clark and Kenton Lee and Luke Zettlemoyer}, Booktitle = {North American Association for Computational Linguistics}, Year = "2018", Title = {Deep contextualized word representations} } </pre> </div> </li> <li> <a href="https://arxiv.org/abs/1804.06026">Learning to Color from Language.</a><br> Varun Manjunatha*, <b>Mohit Iyyer</b>*, Jordan Boyd-Graber, Larry Davis.<br> <i>NAACL 2018 (short)</i><br> <h2><a href="https://github.com/superhans/colorfromlanguage">code</a> // <a href="javascript:unhide('color18');" class="bibtex">bibtex</a></h2> <div id="color18" class="hidden"> <pre> @inproceedings{Manjunatha2018, Author = {Varun Manjunatha and Mohit Iyyer and Jordan Boyd-Graber and Larry Davis}, Booktitle = {North American Association for Computational Linguistics}, Year = "2018", Title = {Learning to Color from Language} } </pre> </div> </li> <li> <a href="pubs/2017_acl_dynsp.pdf">Search-based Neural Structured Learning for Sequential Question Answering.</a><br> <b>Mohit Iyyer</b>, Wen-tau Yih, and Ming-Wei Chang.<br> <i>ACL 2017</i><br> <h2><a href="https://github.com/scottyih/DynSP">code</a> // <a href="https://www.microsoft.com/en-us/download/details.aspx?id=54253">data</a> // <a href="javascript:unhide('sqa17');" class="bibtex">bibtex</a> // <a href="https://arxiv.org/abs/1611.01242">previous version</a></h2> <div id="sqa17" class="hidden"> <pre> @inproceedings{IyyerSQA2016, Author = {Mohit Iyyer and Wen-tau Yih and Ming-Wei Chang}, Booktitle = {Association for Computational Linguistics}, Year = "2017", Title = {Search-based Neural Structured Learning for Sequential Question Answering} } </pre> </div> </li> <li> <a href="pubs/2017_cvpr_comics.pdf">The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives.</a><br> <b>Mohit Iyyer</b>*, Varun Manjunatha*, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daumé III, and Larry Davis.<br> <i>CVPR 2017 (spotlight)</i><br> <h2><a href="https://github.com/miyyer/comics">code + data</a> // <a href="https://www.youtube.com/watch?v=9t2DrC-jGp4">video</a> // <a href="javascript:unhide('comics16');" class="bibtex">bibtex</a> // press: <a href="https://www.technologyreview.com/s/602973/ai-machine-attempts-to-understand-comic-books-and-fails/">mit tech review</a>, <a href="http://www.digitaltrends.com/cool-tech/comic-book-ai/">digital trends</a></h2> <div id="comics16" class="hidden"> <pre> @inproceedings{IyyerComics2016, Author = {Mohit Iyyer and Varun Manjunatha and Anupam Guha and Yogarshi Vyas and Jordan Boyd-Graber and Hal {Daum\'{e} III} and Larry Davis}, Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition}, Year = "2017", Title = {The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives} } </pre> </div> </li> <li> <a href="pubs/2017_relationships_aaai.pdf">Unsupervised Learning of Evolving Relationships Between Literary Characters</a>.<br> Snigdha Chaturvedi, <b>Mohit Iyyer</b>, and Hal Daumé III.
<br> <i>AAAI 2017</i><br> <h2><a href="javascript:unhide('aaai17');" class="bibtex">bibtex</a></h2> <div id="aaai17" class="hidden"> <pre> @inproceedings{Chaturvedi:Iyyer:Daume-III-2016, Author = {Snigdha Chaturvedi and Mohit Iyyer and Hal {Daum\'{e} III}}, Booktitle = {Association for the Advancement of Artificial Intelligence}, Year = {2017}, Title = {Unsupervised Learning of Evolving Relationships Between Literary Characters}, } </pre> </div> </li> <li> <a href="pubs/2016_naacl_relationships.pdf">Feuding Families and Former Friends: Unsupervised Learning for Dynamic Fictional Relationships.</a><br> <b>Mohit Iyyer</b>, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III.<br> <i>NAACL 2016 (<a href="http://naacl.org/naacl-hlt-2016/best_papers.html">best long paper</a>)</i><br> <h2><a href="http://github.com/miyyer/rmn">code + data</a> // <a href="pubs/2016_naacl_sup.pdf">supplementary</a> // <a href="data/2016_naacl_relationships.pdf">slides</a> // <a href="http://techtalks.tv/talks/feuding-families-and-former-friends-unsupervised-learning-for-dynamic-fictional-relationships/62898/">video</a> // <a href="javascript:unhide('naacl16');" class="bibtex">bibtex</a> // press: <a href="https://aeon.co/essays/how-ai-is-revolutionising-the-role-of-the-literary-critic">aeon</a></h2> <div id="naacl16" class="hidden"> <pre> @inproceedings{Iyyer:Guha:Chaturvedi:Boyd-Graber:Daume-III-2016, Author = {Mohit Iyyer and Anupam Guha and Snigdha Chaturvedi and Jordan Boyd-Graber and Hal {Daum\'{e} III}}, Booktitle = {North American Association for Computational Linguistics}, Location = {San Diego, CA}, Year = {2016}, Title = {Feuding Families and Former Friends: Unsupervised Learning for Dynamic Fictional Relationships}, } </pre> </div> </li> <li> <a href="https://www.cs.umd.edu/~aguha/publications/2016_naacl_paintings.pdf">"A Distorted Skull Lies in the Bottom Center..."
Identifying Paintings from Text Descriptions</a>.<br> Anupam Guha, <b>Mohit Iyyer</b>, and Jordan Boyd-Graber.<br> <i>NAACL Human-Computer QA Workshop, 2016</i> <br> <h2><a href="https://www.cs.umd.edu/~aguha/data/paintdata.rar">data</a> // <a href="javascript:unhide('paintingqa16');" class="bibtex">bibtex</a></h2> <div id="paintingqa16" class="hidden"> <pre> @inproceedings{Guha:Iyyer:Boyd-Graber-2016, Author = {Anupam Guha and Mohit Iyyer and Jordan Boyd-Graber}, Booktitle = {NAACL Human-Computer Question Answering Workshop}, Location = {San Diego}, Year = {2016}, Title = {A Distorted Skull Lies in the Bottom Center: Identifying Paintings from Text Descriptions}, } </pre> </div> </li> <li> <a href="http://jmlr.org/proceedings/papers/v48/kumar16.pdf">Ask Me Anything: Dynamic Memory Networks for Natural Language Processing</a>.<br> Ankit Kumar, Ozan Irsoy, Peter Ondruska, <b>Mohit Iyyer</b>, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher.<br> <i>ICML 2016</i> <br> <h2><a href="javascript:unhide('dmn15');" class="bibtex">bibtex</a></h2> <div id="dmn15" class="hidden"> <pre> @inproceedings{DMN2016, Author = {Ankit Kumar and Ozan Irsoy and Peter Ondruska and Mohit Iyyer and James Bradbury and Ishaan Gulrajani and Victor Zhong and Romain Paulus and Richard Socher}, Booktitle = {International Conference on Machine Learning}, Year = {2016}, Title = {Ask Me Anything: Dynamic Memory Networks for Natural Language Processing}, } </pre> </div> </li> <li> Interactive Incremental Question Answering.<br> Jordan Boyd-Graber, <b>Mohit Iyyer</b>, He He, and Hal Daumé III.<br> <i>NIPS Demonstration Track, 2015 (<a href="https://nips.cc/Conferences/2015/Awards">outstanding demonstration</a>)</i><br> <h2><a href="javascript:unhide('nips15');" class="bibtex">bibtex</a></h2> <div id="nips15" class="hidden"> <pre> @inproceedings{Boyd-Graber:Iyyer:He:Daume-III-2015, Author = {Jordan Boyd-Graber and Mohit Iyyer and He He and Hal {Daum\'{e} III}}, Booktitle = {Neural Information Processing Systems}, Location = {Montreal, Canada}, Year = {2015}, Title = {Interactive Incremental Question Answering}, } </pre> </div> </li> <li> <a href="pubs/2015_acl_dan.pdf">Deep Unordered Composition Rivals Syntactic Methods for Text Classification</a>.<br> <b>Mohit Iyyer</b>, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III.<br> <i>ACL 2015</i><br> <h2><a href="http://github.com/miyyer/dan">code + data</a> // <a href="data/acldan_slides.pdf">slides</a> // <a href="http://techtalks.tv/talks/deep-unordered-composition-rivals-syntactic-methods-for-text-classification/61844/">video</a> // <a href="javascript:unhide('acl15');" class="bibtex">bibtex</a><br></h2> <div id="acl15" class="hidden"> <pre> @inproceedings{Iyyer:Manjunatha:Boyd-Graber:Daume-III-2015, Title = {Deep Unordered Composition Rivals Syntactic Methods for Text Classification}, Booktitle = {Association for Computational Linguistics}, Author = {Mohit Iyyer and Varun Manjunatha and Jordan Boyd-Graber and Hal {Daum\'{e} III}}, Year = {2015}, Location = {Beijing, China} } </pre> </div> </li> <li> <a href="pubs/2015_naacl_qb_coref.pdf">Removing the Training Wheels: A Coreference Dataset that Entertains Humans and Challenges Computers</a>.<br> Anupam Guha, <b>Mohit Iyyer</b>, Danny Bouman, and Jordan Boyd-Graber.<br> <i>NAACL 2015</i> <br> <h2><a href="http://www.cs.umd.edu/~aguha/qbcoreference">code + data</a> // <a href="javascript:unhide('naacl15');" class="bibtex">bibtex</a></h2> <div id="naacl15" class="hidden"> <pre>
@inproceedings{Guha:Iyyer:Bouman:Boyd-Graber-2015, Title = {Removing the Training Wheels: A Coreference Dataset that Entertains Humans and Challenges Computers.}, Author = {Anupam Guha and Mohit Iyyer and Danny Bouman and Jordan Boyd-Graber}, Booktitle = {North American Association for Computational Linguistics}, Year = {2015}, Location = {Denver, Colorado} } </pre> </div> </li> <li> <a href="pubs/2014_nips_generation.pdf">Generating Sentences from Semantic Vector Space Representations</a>.<br> <b>Mohit Iyyer</b>, Jordan Boyd-Graber, and Hal Daumé III.<br> <i>NIPS Workshop on Learning Semantics, 2014</i> <br> <h2><a href="javascript:unhide('nipsls14');" class="bibtex">bibtex</a></h2> <div id="nipsls14" class="hidden"> <pre> @inproceedings{Iyyer:Boyd-Graber:Daume-2014, Title = {Generating Sentences from Semantic Vector Space Representations}, Author = {Mohit Iyyer and Jordan Boyd-Graber and Hal {Daum\'e III}}, Booktitle = {NIPS Workshop on Learning Semantics}, Year = {2014}, Location = {Montreal, Canada} } </pre> </div> </li> <li> <a href="pubs/2014_qb_rnn.pdf">A Neural Network for Factoid Question Answering over Paragraphs</a>.<br> <b>Mohit Iyyer</b>, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III.<br> <i>EMNLP 2014</i> <br> <h2><a href="qblearn/index.html">code + data</a> // <a href="javascript:unhide('emnlp14qb');" class="bibtex">bibtex</a> // press: <a href="https://www.umiacs.umd.edu/about-us/news/computerized-question-answering-system-built-umd-uc-boulder-bests-%E2%80%9Cjeopardy%E2%80%9D-champion">umiacs</a>, <a href="http://terp.umd.edu/what-is-a-jeopardy-worthy-computer#.WEb-Y8wrLdF">terp</a>, <a href="http://www.diamondbackonline.com/news/umd-researchers-computer-beats-jeopardy-star-ken-jennings-at-trivia/article_133f05f4-75d9-11e5-9c00-8718fca7f4ec.html">diamondback</a>, <a href="http://www.colorado.edu/cs/2015/07/23/professor%E2%80%99s-quiz-bowl-robot-goes-head-head-humans">colorado cs</a> </h2> <div id="emnlp14qb" class="hidden"> <pre> @inproceedings{Iyyer:Boyd-Graber:Claudino:Socher:Daume-2014, Title = {A Neural Network for Factoid Question Answering over Paragraphs}, Author = {Mohit Iyyer and Jordan Boyd-Graber and Leonardo Claudino and Richard Socher and Hal {Daum\'e III}}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2014}, Location = {Doha, Qatar}, } </pre> </div> </li> <li> <a href="pubs/2014_RNN_framing.pdf">Political Ideology Detection Using Recursive Neural Networks</a>.<br> <b>Mohit Iyyer</b>, Peter Enns, Jordan Boyd-Graber, and Philip Resnik.<br> <i>ACL 2014</i><br> <h2><a href="ibc/index.html">data</a> // <a href="javascript:unhide('acl14pol');" class="bibtex">bibtex</a></h2> <div id="acl14pol" class="hidden"> <pre> @inproceedings{Iyyer:Enns:Boyd-Graber:Resnik-2014, Title = {Political Ideology Detection Using Recursive Neural Networks}, Author = {Mohit Iyyer and Peter Enns and Jordan Boyd-Graber and Philip Resnik}, Booktitle = {Association for Computational Linguistics}, Year = {2014}, Location = {Baltimore, Maryland}, } </pre> </div> </li> </ul> </div> </div> </body> </html>
