<!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="content-type"/> <title>Narrative Information Theory</title> <!--Generated on Fri Nov 15 18:30:43 2024 by LaTeXML (version 0.8.8) http://dlmf.nist.gov/LaTeXML/.--> <meta content="width=device-width, initial-scale=1, shrink-to-fit=no" name="viewport"/> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/ar5iv-fonts.0.7.9.min.css" rel="stylesheet" type="text/css"/> <link href="/static/browse/0.3.4/css/latexml_styles.css" rel="stylesheet" type="text/css"/> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.3/html2canvas.min.js"></script> <script src="/static/browse/0.3.4/js/addons_new.js"></script> <script src="/static/browse/0.3.4/js/feedbackOverlay.js"></script> <base href="/html/2411.12907v1/"/></head> <body> <nav class="ltx_page_navbar"> <nav class="ltx_TOC"> <ol class="ltx_toclist"> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1" title="In Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">1 </span>Introduction</span></a></li> <li class="ltx_tocentry ltx_tocentry_section"> <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2" title="In Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2 </span>Framework and Results</span></a> <ol class="ltx_toclist ltx_toclist_section"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.SS1" title="In 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text 
ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.1 </span>States, complexity and pivots</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.SS2" title="In 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">2.2 </span>Prediction-focussed metrics: suspense and plot twists</span></a></li> </ol> </li> <li class="ltx_tocentry ltx_tocentry_section"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S3" title="In Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">3 </span>Discussion</span></a></li> <li class="ltx_tocentry ltx_tocentry_appendix"> <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#A1" title="In Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A </span>Appendix / supplemental material</span></a> <ol class="ltx_toclist ltx_toclist_appendix"> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#A1.SS1" title="In Appendix A Appendix / supplemental material ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A.1 </span>Dataset</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#A1.SS2" title="In Appendix A Appendix / supplemental material ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A.2 </span>Analysis pipeline</span></a></li> <li class="ltx_tocentry ltx_tocentry_subsection"><a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#A1.SS3" title="In Appendix A Appendix / supplemental material ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_title"><span class="ltx_tag ltx_tag_ref">A.3 </span>Choice of divergence measure</span></a></li> 
</ol> </li> </ol></nav> </nav> <div class="ltx_page_main"> <div class="ltx_page_content"> <article class="ltx_document ltx_authors_1line"> <h1 class="ltx_title ltx_title_document">Narrative Information Theory</h1> <div class="ltx_authors"> <span class="ltx_creator ltx_role_author"> <span class="ltx_personname">Lion Schulz<sup class="ltx_sup" id="id6.6.id1"><span class="ltx_text ltx_font_italic" id="id6.6.id1.1">1</span></sup>  Miguel Patrício<sup class="ltx_sup" id="id7.7.id2"><span class="ltx_text ltx_font_italic" id="id7.7.id2.1">2</span></sup>  Daan Odijk<sup class="ltx_sup" id="id8.8.id3"><span class="ltx_text ltx_font_italic" id="id8.8.id3.1">2</span></sup> <br class="ltx_break"/> <sup class="ltx_sup" id="id9.9.id4">1</sup>Bertelsmann  <sup class="ltx_sup" id="id10.10.id5">2</sup>RTL Nederland <br class="ltx_break"/> <span class="ltx_text ltx_font_typewriter" id="id11.11.id6">lion.schulz@bertelsmann.de</span> </span></span> </div> <div class="ltx_abstract"> <h6 class="ltx_title ltx_title_abstract">Abstract</h6> <p class="ltx_p" id="id12.id1">We propose an information-theoretic framework to measure narratives, providing a formalism to understand pivotal moments, cliffhangers, and plot twists. This approach offers creatives and AI researchers tools to analyse and benchmark human- and AI-created stories. We illustrate our method in TV shows, showing its ability to quantify narrative complexity and emotional dynamics across genres. 
We discuss applications in media and in human-in-the-loop generative AI storytelling.</p> </div> <section class="ltx_section" id="S1"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">1 </span>Introduction</h2> <div class="ltx_para" id="S1.p1"> <p class="ltx_p" id="S1.p1.1">As AI systems begin to tell their own stories <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib1" title="">1</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib2" title="">2</a>]</cite>, it becomes crucial to develop formal methods for understanding and evaluating the content they produce <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib3" title="">3</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib4" title="">4</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib5" title="">5</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib6" title="">6</a>]</cite>. We introduce a general information-theoretic framework to measure narratives, capturing key storytelling elements like novelty and surprise <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib7" title="">7</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib8" title="">8</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib9" title="">9</a>]</cite>. This work provides tools for creatives and for applied machine learning scientists to analyse narrative structures and for GenAI researchers to benchmark stories told by machines. 
It thereby also offers a foundation for developing human-in-the-loop AI systems that assist in narrative creation while maintaining the nuanced human understanding of stories <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib10" title="">10</a>]</cite>.</p> </div> <div class="ltx_para" id="S1.p2"> <p class="ltx_p" id="S1.p2.1">Narratives are a key higher-level representation of how a story gets told <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib3" title="">3</a>]</cite> and are crucial to how we understand worlds, both fictional and real. Previous machine learning work on detecting narratives has focussed on specific content and modalities <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib11" title="">11</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib12" title="">12</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib13" title="">13</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib14" title="">14</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib15" title="">15</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib16" title="">16</a>]</cite>. We build on this by introducing a general information-theoretic narrative framework that is agnostic to content and modality and lets us intuitively capture crucial storytelling devices, like novelty and surprise – which have thus far seen less attention in the study of narratives. 
We illustrate our measures in a corpus of over 3000 minutes of TV shows, one of the most popular forms of contemporary storytelling.</p> </div> <figure class="ltx_figure" id="S1.F1"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="151" id="S1.F1.g1" src="extracted/5993258/Fig1_ultrawide.png" width="354"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 1: </span>Overview – information-theoretic measures of narratives.</figcaption> </figure> </section> <section class="ltx_section" id="S2"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">2 </span>Framework and Results</h2> <section class="ltx_subsection" id="S2.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.1 </span>States, complexity and pivots</h3> <div class="ltx_para" id="S2.SS1.p1"> <p class="ltx_p" id="S2.SS1.p1.2">We can intuitively grasp what is going on in a story: a happy scene follows a sad scene; cliffhangers leave us unsure about what will happen next. Here, we show how we can capture such dynamics using information theory. 
To construct our formal framework, we begin by decomposing a story into basic building blocks that we call states at each timepoint <math alttext="t" class="ltx_Math" display="inline" id="S2.SS1.p1.1.m1.1"><semantics id="S2.SS1.p1.1.m1.1a"><mi id="S2.SS1.p1.1.m1.1.1" xref="S2.SS1.p1.1.m1.1.1.cmml">t</mi><annotation-xml encoding="MathML-Content" id="S2.SS1.p1.1.m1.1b"><ci id="S2.SS1.p1.1.m1.1.1.cmml" xref="S2.SS1.p1.1.m1.1.1">𝑡</ci></annotation-xml><annotation encoding="application/x-tex" id="S2.SS1.p1.1.m1.1c">t</annotation><annotation encoding="application/x-llamapun" id="S2.SS1.p1.1.m1.1d">italic_t</annotation></semantics></math> in the narrative, <math alttext="s_{t}" class="ltx_Math" display="inline" id="S2.SS1.p1.2.m2.1"><semantics id="S2.SS1.p1.2.m2.1a"><msub id="S2.SS1.p1.2.m2.1.1" xref="S2.SS1.p1.2.m2.1.1.cmml"><mi id="S2.SS1.p1.2.m2.1.1.2" xref="S2.SS1.p1.2.m2.1.1.2.cmml">s</mi><mi id="S2.SS1.p1.2.m2.1.1.3" xref="S2.SS1.p1.2.m2.1.1.3.cmml">t</mi></msub><annotation-xml encoding="MathML-Content" id="S2.SS1.p1.2.m2.1b"><apply id="S2.SS1.p1.2.m2.1.1.cmml" xref="S2.SS1.p1.2.m2.1.1"><csymbol cd="ambiguous" id="S2.SS1.p1.2.m2.1.1.1.cmml" xref="S2.SS1.p1.2.m2.1.1">subscript</csymbol><ci id="S2.SS1.p1.2.m2.1.1.2.cmml" xref="S2.SS1.p1.2.m2.1.1.2">𝑠</ci><ci id="S2.SS1.p1.2.m2.1.1.3.cmml" xref="S2.SS1.p1.2.m2.1.1.3">𝑡</ci></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.SS1.p1.2.m2.1c">s_{t}</annotation><annotation encoding="application/x-llamapun" id="S2.SS1.p1.2.m2.1d">italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT</annotation></semantics></math> (see Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a> for an overview). We are agnostic about the nature of this state. 
It might contain the setting of a scene, the emotions of the characters, or be based on a more complex model of the story.</p> </div> <div class="ltx_para" id="S2.SS1.p2"> <p class="ltx_p" id="S2.SS1.p2.1">For illustration, we applied our framework to a corpus of TV shows where we defined states as distributions of emotions inferred from actor faces (see appendix). However, we note that our framework is agnostic to the modality of the story told and so could also be applied to other media, such as written text or audio. Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>A shows an example trajectory of states across an episode of a drama TV show set in a crime setting. The episode starts out on a sad note, transitions into a more neutral phase, and then ends again on two sad beats.</p> </div> <div class="ltx_para" id="S2.SS1.p3"> <p class="ltx_p" id="S2.SS1.p3.1">Our first information-theoretic measure to better understand <math alttext="s_{t}" class="ltx_Math" display="inline" id="S2.SS1.p3.1.m1.1"><semantics id="S2.SS1.p3.1.m1.1a"><msub id="S2.SS1.p3.1.m1.1.1" xref="S2.SS1.p3.1.m1.1.1.cmml"><mi id="S2.SS1.p3.1.m1.1.1.2" xref="S2.SS1.p3.1.m1.1.1.2.cmml">s</mi><mi id="S2.SS1.p3.1.m1.1.1.3" xref="S2.SS1.p3.1.m1.1.1.3.cmml">t</mi></msub><annotation-xml encoding="MathML-Content" id="S2.SS1.p3.1.m1.1b"><apply id="S2.SS1.p3.1.m1.1.1.cmml" xref="S2.SS1.p3.1.m1.1.1"><csymbol cd="ambiguous" id="S2.SS1.p3.1.m1.1.1.1.cmml" xref="S2.SS1.p3.1.m1.1.1">subscript</csymbol><ci id="S2.SS1.p3.1.m1.1.1.2.cmml" xref="S2.SS1.p3.1.m1.1.1.2">𝑠</ci><ci id="S2.SS1.p3.1.m1.1.1.3.cmml" xref="S2.SS1.p3.1.m1.1.1.3">𝑡</ci></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.SS1.p3.1.m1.1c">s_{t}</annotation><annotation encoding="application/x-llamapun" id="S2.SS1.p3.1.m1.1d">italic_s start_POSTSUBSCRIPT 
italic_t end_POSTSUBSCRIPT</annotation></semantics></math> is the entropy of this state which we can understand as the complexity <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib17" title="">17</a>]</cite> of a narrative state:</p> </div> <figure class="ltx_figure" id="S2.F2"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_landscape" height="390" id="S2.F2.g1" src="extracted/5993258/NarrativeInfo_Fig1.png" width="773"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 2: </span>Results – emotions, as well as the complexity and pivot metrics in an example episode (crime thriller) and across different shows (see appendix for details).</figcaption> </figure> <div class="ltx_para" id="S2.SS1.p4"> <table class="ltx_equation ltx_eqn_table" id="S2.E1"> <tbody><tr class="ltx_equation ltx_eqn_row ltx_align_baseline"> <td class="ltx_eqn_cell ltx_eqn_center_padleft"></td> <td class="ltx_eqn_cell ltx_align_center"><math alttext="\textbf{Complexity}=\text{H}(s_{t})" class="ltx_Math" display="block" id="S2.E1.m1.1"><semantics id="S2.E1.m1.1a"><mrow id="S2.E1.m1.1.1" xref="S2.E1.m1.1.1.cmml"><mtext class="ltx_mathvariant_bold" id="S2.E1.m1.1.1.3" xref="S2.E1.m1.1.1.3a.cmml">Complexity</mtext><mo id="S2.E1.m1.1.1.2" xref="S2.E1.m1.1.1.2.cmml">=</mo><mrow id="S2.E1.m1.1.1.1" xref="S2.E1.m1.1.1.1.cmml"><mtext id="S2.E1.m1.1.1.1.3" xref="S2.E1.m1.1.1.1.3a.cmml">H</mtext><mo id="S2.E1.m1.1.1.1.2" xref="S2.E1.m1.1.1.1.2.cmml">⁢</mo><mrow id="S2.E1.m1.1.1.1.1.1" xref="S2.E1.m1.1.1.1.1.1.1.cmml"><mo id="S2.E1.m1.1.1.1.1.1.2" stretchy="false" xref="S2.E1.m1.1.1.1.1.1.1.cmml">(</mo><msub id="S2.E1.m1.1.1.1.1.1.1" xref="S2.E1.m1.1.1.1.1.1.1.cmml"><mi id="S2.E1.m1.1.1.1.1.1.1.2" xref="S2.E1.m1.1.1.1.1.1.1.2.cmml">s</mi><mi id="S2.E1.m1.1.1.1.1.1.1.3" xref="S2.E1.m1.1.1.1.1.1.1.3.cmml">t</mi></msub><mo id="S2.E1.m1.1.1.1.1.1.3" stretchy="false" 
xref="S2.E1.m1.1.1.1.1.1.1.cmml">)</mo></mrow></mrow></mrow><annotation-xml encoding="MathML-Content" id="S2.E1.m1.1b"><apply id="S2.E1.m1.1.1.cmml" xref="S2.E1.m1.1.1"><eq id="S2.E1.m1.1.1.2.cmml" xref="S2.E1.m1.1.1.2"></eq><ci id="S2.E1.m1.1.1.3a.cmml" xref="S2.E1.m1.1.1.3"><mtext class="ltx_mathvariant_bold" id="S2.E1.m1.1.1.3.cmml" xref="S2.E1.m1.1.1.3">Complexity</mtext></ci><apply id="S2.E1.m1.1.1.1.cmml" xref="S2.E1.m1.1.1.1"><times id="S2.E1.m1.1.1.1.2.cmml" xref="S2.E1.m1.1.1.1.2"></times><ci id="S2.E1.m1.1.1.1.3a.cmml" xref="S2.E1.m1.1.1.1.3"><mtext id="S2.E1.m1.1.1.1.3.cmml" xref="S2.E1.m1.1.1.1.3">H</mtext></ci><apply id="S2.E1.m1.1.1.1.1.1.1.cmml" xref="S2.E1.m1.1.1.1.1.1"><csymbol cd="ambiguous" id="S2.E1.m1.1.1.1.1.1.1.1.cmml" xref="S2.E1.m1.1.1.1.1.1">subscript</csymbol><ci id="S2.E1.m1.1.1.1.1.1.1.2.cmml" xref="S2.E1.m1.1.1.1.1.1.1.2">𝑠</ci><ci id="S2.E1.m1.1.1.1.1.1.1.3.cmml" xref="S2.E1.m1.1.1.1.1.1.1.3">𝑡</ci></apply></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.E1.m1.1c">\textbf{Complexity}=\text{H}(s_{t})</annotation><annotation encoding="application/x-llamapun" id="S2.E1.m1.1d">Complexity = H ( italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT )</annotation></semantics></math></td> <td class="ltx_eqn_cell ltx_eqn_center_padright"></td> <td class="ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_right" rowspan="1"><span class="ltx_tag ltx_tag_equation ltx_align_right">(1)</span></td> </tr></tbody> </table> </div> <div class="ltx_para" id="S2.SS1.p5"> <p class="ltx_p" id="S2.SS1.p5.1">In our example emotion case, the entropy of a scene will be lower when a distribution is dominated by one emotion (everyone is happy) but higher when there is a mixture of emotions. 
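As a minimal sketch of Eq. (1) – assuming a state is simply a probability vector over a small, hypothetical emotion set, a simplification of the setup described here – the complexity could be computed as:

```python
import math

def complexity(state):
    """Complexity = H(s_t): Shannon entropy of a state distribution, in bits."""
    return -sum(p * math.log2(p) for p in state if p > 0)

# Hypothetical states over the emotions (happy, sad, angry, neutral).
mixed_scene = [0.25, 0.25, 0.25, 0.25]  # broad mix of emotions
sad_scene = [0.05, 0.90, 0.05, 0.00]    # dominated by sadness

print(complexity(mixed_scene))  # 2.0 bits, the maximum for four emotions
print(complexity(sad_scene))    # ~0.57 bits
```

Averaging the states of a whole chapter or episode before applying the same function yields the coarser (sub-)story-level complexity.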
We show this in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>C, where we plot the state-by-state entropy for our example episode. The peaks in sadness in the episode’s finale are characterized by valleys in entropy, whereas the middle of the show is characterized by comparatively higher entropy because of its mix of emotions.</p> </div> <div class="ltx_para" id="S2.SS1.p6"> <p class="ltx_p" id="S2.SS1.p6.1">Our entropy measure can operate at different resolutions: for example, books are made up of chapters and paragraphs, and movies can be split into individual scenes. In turn, the complexity of these entire (sub-)stories can be expressed by the entropy of the average of the states. Consequently, a book that contains mostly sad scenes will have generally low entropy, whereas a movie with a mix of emotions will have high entropy.</p> </div> <div class="ltx_para" id="S2.SS1.p7"> <p class="ltx_p" id="S2.SS1.p7.1">We apply this analysis at the level of entire TV shows, analysing our corpus of over 3000 minutes of video across genres as varied as crime thrillers, historical dramas, and reality TV (see appendix for details). Our analysis revealed distinct patterns across genres (see Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>D). Reality formats showed higher entropy, indicating a broader mix of emotions (see the underlying emotion distributions in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>B). 
In contrast, dramas/thrillers had lower entropy, focussing on specific emotional tones.</p> </div> <div class="ltx_para" id="S2.SS1.p8"> <p class="ltx_p" id="S2.SS1.p8.1">We can extend this approach to story dynamics, or transitions between states <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib7" title="">7</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib9" title="">9</a>]</cite>. A character might start out happy and then be sad. We describe such pivots as the Jensen-Shannon divergence (JSD) between two states:</p> </div> <div class="ltx_para" id="S2.SS1.p9"> <table class="ltx_equation ltx_eqn_table" id="S2.E2"> <tbody><tr class="ltx_equation ltx_eqn_row ltx_align_baseline"> <td class="ltx_eqn_cell ltx_eqn_center_padleft"></td> <td class="ltx_eqn_cell ltx_align_center"><math alttext="\textbf{Pivot}=\text{JSD}(s_{t}\mid\mid s_{t-1})" class="ltx_math_unparsed" display="block" id="S2.E2.m1.1"><semantics id="S2.E2.m1.1a"><mrow id="S2.E2.m1.1b"><mtext class="ltx_mathvariant_bold" id="S2.E2.m1.1.1">Pivot</mtext><mo id="S2.E2.m1.1.2">=</mo><mtext id="S2.E2.m1.1.3">JSD</mtext><mrow id="S2.E2.m1.1.4"><mo id="S2.E2.m1.1.4.1" stretchy="false">(</mo><msub id="S2.E2.m1.1.4.2"><mi id="S2.E2.m1.1.4.2.2">s</mi><mi id="S2.E2.m1.1.4.2.3">t</mi></msub><mo id="S2.E2.m1.1.4.3" lspace="0em" rspace="0.0835em">∣</mo><mo id="S2.E2.m1.1.4.4" lspace="0.0835em" rspace="0.167em">∣</mo><msub id="S2.E2.m1.1.4.5"><mi id="S2.E2.m1.1.4.5.2">s</mi><mrow id="S2.E2.m1.1.4.5.3"><mi id="S2.E2.m1.1.4.5.3.2">t</mi><mo id="S2.E2.m1.1.4.5.3.1">−</mo><mn id="S2.E2.m1.1.4.5.3.3">1</mn></mrow></msub><mo id="S2.E2.m1.1.4.6" stretchy="false">)</mo></mrow></mrow><annotation encoding="application/x-tex" id="S2.E2.m1.1c">\textbf{Pivot}=\text{JSD}(s_{t}\mid\mid s_{t-1})</annotation><annotation encoding="application/x-llamapun" id="S2.E2.m1.1d">Pivot = JSD ( italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ∣ ∣ italic_s 
start_POSTSUBSCRIPT italic_t - 1 end_POSTSUBSCRIPT )</annotation></semantics></math></td> <td class="ltx_eqn_cell ltx_eqn_center_padright"></td> <td class="ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_right" rowspan="1"><span class="ltx_tag ltx_tag_equation ltx_align_right">(2)</span></td> </tr></tbody> </table> </div> <div class="ltx_para" id="S2.SS1.p10"> <p class="ltx_p" id="S2.SS1.p10.1">Strong shifts in the state, for example a move from a shot full of mainly happy characters to one dominated by angry characters, would result in a high divergence. In contrast, two happy scenes following each other lead to a low divergence. We plot this metric for our example TV episode in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>E. Peaks indicate moments of significant shifts in emotional state (e.g. in our example at around 25:00) – essentially a story beat. Taking the liberty of more poetic language, we can interpret this curve as the heartbeat of the story.<span class="ltx_note ltx_role_footnote" id="footnote1"><sup class="ltx_note_mark">1</sup><span class="ltx_note_outer"><span class="ltx_note_content"><sup class="ltx_note_mark">1</sup><span class="ltx_tag ltx_tag_note">1</span>We choose JSD over the Kullback-Leibler divergence (KLD) because of its symmetry and because its boundedness makes it more suitable for use as a metric <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib18" title="">18</a>]</cite>. We observe similar qualitative results with KLD; see appendix.</span></span></span></p> </div> <div class="ltx_para" id="S2.SS1.p11"> <p class="ltx_p" id="S2.SS1.p11.1">Our pivot measure also revealed genre-specific patterns (see Fig. 
<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S2.F2" title="Figure 2 ‣ 2.1 States, complexity and pivots ‣ 2 Framework and Results ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">2</span></a>F). Reality and dating shows displayed higher JSD, indicating frequent and stronger emotional shifts – essentially an "emotional rollercoaster" metric. By contrast, genres like dramas and thrillers rely on comparatively more gradual changes – often "slower burns". These results demonstrate our framework’s ability to quantify narrative structures and emotional dynamics, providing a basis for comparing storytelling techniques across different genres and between human-created and AI-generated content.</p> </div> </section> <section class="ltx_subsection" id="S2.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">2.2 </span>Prediction-focussed metrics: suspense and plot twists</h3> <div class="ltx_para" id="S2.SS2.p1"> <p class="ltx_p" id="S2.SS2.p1.4">Audiences do not just observe what happens at time <math alttext="t" class="ltx_Math" display="inline" id="S2.SS2.p1.1.m1.1"><semantics id="S2.SS2.p1.1.m1.1a"><mi id="S2.SS2.p1.1.m1.1.1" xref="S2.SS2.p1.1.m1.1.1.cmml">t</mi><annotation-xml encoding="MathML-Content" id="S2.SS2.p1.1.m1.1b"><ci id="S2.SS2.p1.1.m1.1.1.cmml" xref="S2.SS2.p1.1.m1.1.1">𝑡</ci></annotation-xml><annotation encoding="application/x-tex" id="S2.SS2.p1.1.m1.1c">t</annotation><annotation encoding="application/x-llamapun" id="S2.SS2.p1.1.m1.1d">italic_t</annotation></semantics></math> but also wonder what happens next (see Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>). We assume they predict the next story state based on the story so far. 
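A minimal sketch of such a predictor, under two strong simplifying assumptions that are ours rather than the paper's (states discretised to a single dominant-emotion label, and conditioning only on the most recent state instead of the full history), is a transition-frequency model:

```python
from collections import Counter, defaultdict

def fit_transitions(labels):
    """Estimate P(s_{t+1} | s_t) from a sequence of dominant-emotion labels.

    A first-order (Markov) stand-in for the full predictor P(s_{t+1} | S_t).
    """
    counts = defaultdict(Counter)
    for prev, nxt in zip(labels, labels[1:]):
        counts[prev][nxt] += 1
    return {prev: {state: c / sum(cnt.values()) for state, c in cnt.items()}
            for prev, cnt in counts.items()}

# Hypothetical episode: one dominant emotion per scene.
episode = ["sad", "sad", "neutral", "neutral", "neutral", "sad", "sad"]
model = fit_transitions(episode)
print(model["sad"])  # sad is followed by sad 2/3 and by neutral 1/3 of the time
```

A learned sequence model could replace this frequency table without changing the surrounding metrics.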
We write this prediction as <math alttext="P(s_{t+1}|S_{t})" class="ltx_Math" display="inline" id="S2.SS2.p1.2.m2.1"><semantics id="S2.SS2.p1.2.m2.1a"><mrow id="S2.SS2.p1.2.m2.1.1" xref="S2.SS2.p1.2.m2.1.1.cmml"><mi id="S2.SS2.p1.2.m2.1.1.3" xref="S2.SS2.p1.2.m2.1.1.3.cmml">P</mi><mo id="S2.SS2.p1.2.m2.1.1.2" xref="S2.SS2.p1.2.m2.1.1.2.cmml">⁢</mo><mrow id="S2.SS2.p1.2.m2.1.1.1.1" xref="S2.SS2.p1.2.m2.1.1.1.1.1.cmml"><mo id="S2.SS2.p1.2.m2.1.1.1.1.2" stretchy="false" xref="S2.SS2.p1.2.m2.1.1.1.1.1.cmml">(</mo><mrow id="S2.SS2.p1.2.m2.1.1.1.1.1" xref="S2.SS2.p1.2.m2.1.1.1.1.1.cmml"><msub id="S2.SS2.p1.2.m2.1.1.1.1.1.2" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.cmml"><mi id="S2.SS2.p1.2.m2.1.1.1.1.1.2.2" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.2.cmml">s</mi><mrow id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.cmml"><mi id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.2" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.2.cmml">t</mi><mo id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.1" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.1.cmml">+</mo><mn id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.3" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.3.cmml">1</mn></mrow></msub><mo fence="false" id="S2.SS2.p1.2.m2.1.1.1.1.1.1" xref="S2.SS2.p1.2.m2.1.1.1.1.1.1.cmml">|</mo><msub id="S2.SS2.p1.2.m2.1.1.1.1.1.3" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3.cmml"><mi id="S2.SS2.p1.2.m2.1.1.1.1.1.3.2" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3.2.cmml">S</mi><mi id="S2.SS2.p1.2.m2.1.1.1.1.1.3.3" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3.3.cmml">t</mi></msub></mrow><mo id="S2.SS2.p1.2.m2.1.1.1.1.3" stretchy="false" xref="S2.SS2.p1.2.m2.1.1.1.1.1.cmml">)</mo></mrow></mrow><annotation-xml encoding="MathML-Content" id="S2.SS2.p1.2.m2.1b"><apply id="S2.SS2.p1.2.m2.1.1.cmml" xref="S2.SS2.p1.2.m2.1.1"><times id="S2.SS2.p1.2.m2.1.1.2.cmml" xref="S2.SS2.p1.2.m2.1.1.2"></times><ci id="S2.SS2.p1.2.m2.1.1.3.cmml" xref="S2.SS2.p1.2.m2.1.1.3">𝑃</ci><apply id="S2.SS2.p1.2.m2.1.1.1.1.1.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1"><csymbol cd="latexml" id="S2.SS2.p1.2.m2.1.1.1.1.1.1.cmml" 
xref="S2.SS2.p1.2.m2.1.1.1.1.1.1">conditional</csymbol><apply id="S2.SS2.p1.2.m2.1.1.1.1.1.2.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2"><csymbol cd="ambiguous" id="S2.SS2.p1.2.m2.1.1.1.1.1.2.1.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2">subscript</csymbol><ci id="S2.SS2.p1.2.m2.1.1.1.1.1.2.2.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.2">𝑠</ci><apply id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3"><plus id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.1.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.1"></plus><ci id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.2.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.2">𝑡</ci><cn id="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.3.cmml" type="integer" xref="S2.SS2.p1.2.m2.1.1.1.1.1.2.3.3">1</cn></apply></apply><apply id="S2.SS2.p1.2.m2.1.1.1.1.1.3.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3"><csymbol cd="ambiguous" id="S2.SS2.p1.2.m2.1.1.1.1.1.3.1.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3">subscript</csymbol><ci id="S2.SS2.p1.2.m2.1.1.1.1.1.3.2.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3.2">𝑆</ci><ci id="S2.SS2.p1.2.m2.1.1.1.1.1.3.3.cmml" xref="S2.SS2.p1.2.m2.1.1.1.1.1.3.3">𝑡</ci></apply></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.SS2.p1.2.m2.1c">P(s_{t+1}|S_{t})</annotation><annotation encoding="application/x-llamapun" id="S2.SS2.p1.2.m2.1d">italic_P ( italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT | italic_S start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT )</annotation></semantics></math>, essentially the output of a (generative) model <math alttext="m" class="ltx_Math" display="inline" id="S2.SS2.p1.3.m3.1"><semantics id="S2.SS2.p1.3.m3.1a"><mi id="S2.SS2.p1.3.m3.1.1" xref="S2.SS2.p1.3.m3.1.1.cmml">m</mi><annotation-xml encoding="MathML-Content" id="S2.SS2.p1.3.m3.1b"><ci id="S2.SS2.p1.3.m3.1.1.cmml" xref="S2.SS2.p1.3.m3.1.1">𝑚</ci></annotation-xml><annotation encoding="application/x-tex" id="S2.SS2.p1.3.m3.1c">m</annotation><annotation encoding="application/x-llamapun" 
id="S2.SS2.p1.3.m3.1d">italic_m</annotation></semantics></math> that takes <math alttext="S_{t}=\{s_{t},s_{t-1},...,s_{0}\}" class="ltx_Math" display="inline" id="S2.SS2.p1.4.m4.4"><semantics id="S2.SS2.p1.4.m4.4a"><mrow id="S2.SS2.p1.4.m4.4.4" xref="S2.SS2.p1.4.m4.4.4.cmml"><msub id="S2.SS2.p1.4.m4.4.4.5" xref="S2.SS2.p1.4.m4.4.4.5.cmml"><mi id="S2.SS2.p1.4.m4.4.4.5.2" xref="S2.SS2.p1.4.m4.4.4.5.2.cmml">S</mi><mi id="S2.SS2.p1.4.m4.4.4.5.3" xref="S2.SS2.p1.4.m4.4.4.5.3.cmml">t</mi></msub><mo id="S2.SS2.p1.4.m4.4.4.4" xref="S2.SS2.p1.4.m4.4.4.4.cmml">=</mo><mrow id="S2.SS2.p1.4.m4.4.4.3.3" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml"><mo id="S2.SS2.p1.4.m4.4.4.3.3.4" stretchy="false" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml">{</mo><msub id="S2.SS2.p1.4.m4.2.2.1.1.1" xref="S2.SS2.p1.4.m4.2.2.1.1.1.cmml"><mi id="S2.SS2.p1.4.m4.2.2.1.1.1.2" xref="S2.SS2.p1.4.m4.2.2.1.1.1.2.cmml">s</mi><mi id="S2.SS2.p1.4.m4.2.2.1.1.1.3" xref="S2.SS2.p1.4.m4.2.2.1.1.1.3.cmml">t</mi></msub><mo id="S2.SS2.p1.4.m4.4.4.3.3.5" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml">,</mo><msub id="S2.SS2.p1.4.m4.3.3.2.2.2" xref="S2.SS2.p1.4.m4.3.3.2.2.2.cmml"><mi id="S2.SS2.p1.4.m4.3.3.2.2.2.2" xref="S2.SS2.p1.4.m4.3.3.2.2.2.2.cmml">s</mi><mrow id="S2.SS2.p1.4.m4.3.3.2.2.2.3" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.cmml"><mi id="S2.SS2.p1.4.m4.3.3.2.2.2.3.2" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.2.cmml">t</mi><mo id="S2.SS2.p1.4.m4.3.3.2.2.2.3.1" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.1.cmml">−</mo><mn id="S2.SS2.p1.4.m4.3.3.2.2.2.3.3" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.3.cmml">1</mn></mrow></msub><mo id="S2.SS2.p1.4.m4.4.4.3.3.6" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml">,</mo><mi id="S2.SS2.p1.4.m4.1.1" mathvariant="normal" xref="S2.SS2.p1.4.m4.1.1.cmml">…</mi><mo id="S2.SS2.p1.4.m4.4.4.3.3.7" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml">,</mo><msub id="S2.SS2.p1.4.m4.4.4.3.3.3" xref="S2.SS2.p1.4.m4.4.4.3.3.3.cmml"><mi id="S2.SS2.p1.4.m4.4.4.3.3.3.2" xref="S2.SS2.p1.4.m4.4.4.3.3.3.2.cmml">s</mi><mn id="S2.SS2.p1.4.m4.4.4.3.3.3.3" 
xref="S2.SS2.p1.4.m4.4.4.3.3.3.3.cmml">0</mn></msub><mo id="S2.SS2.p1.4.m4.4.4.3.3.8" stretchy="false" xref="S2.SS2.p1.4.m4.4.4.3.4.cmml">}</mo></mrow></mrow><annotation-xml encoding="MathML-Content" id="S2.SS2.p1.4.m4.4b"><apply id="S2.SS2.p1.4.m4.4.4.cmml" xref="S2.SS2.p1.4.m4.4.4"><eq id="S2.SS2.p1.4.m4.4.4.4.cmml" xref="S2.SS2.p1.4.m4.4.4.4"></eq><apply id="S2.SS2.p1.4.m4.4.4.5.cmml" xref="S2.SS2.p1.4.m4.4.4.5"><csymbol cd="ambiguous" id="S2.SS2.p1.4.m4.4.4.5.1.cmml" xref="S2.SS2.p1.4.m4.4.4.5">subscript</csymbol><ci id="S2.SS2.p1.4.m4.4.4.5.2.cmml" xref="S2.SS2.p1.4.m4.4.4.5.2">𝑆</ci><ci id="S2.SS2.p1.4.m4.4.4.5.3.cmml" xref="S2.SS2.p1.4.m4.4.4.5.3">𝑡</ci></apply><set id="S2.SS2.p1.4.m4.4.4.3.4.cmml" xref="S2.SS2.p1.4.m4.4.4.3.3"><apply id="S2.SS2.p1.4.m4.2.2.1.1.1.cmml" xref="S2.SS2.p1.4.m4.2.2.1.1.1"><csymbol cd="ambiguous" id="S2.SS2.p1.4.m4.2.2.1.1.1.1.cmml" xref="S2.SS2.p1.4.m4.2.2.1.1.1">subscript</csymbol><ci id="S2.SS2.p1.4.m4.2.2.1.1.1.2.cmml" xref="S2.SS2.p1.4.m4.2.2.1.1.1.2">𝑠</ci><ci id="S2.SS2.p1.4.m4.2.2.1.1.1.3.cmml" xref="S2.SS2.p1.4.m4.2.2.1.1.1.3">𝑡</ci></apply><apply id="S2.SS2.p1.4.m4.3.3.2.2.2.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2"><csymbol cd="ambiguous" id="S2.SS2.p1.4.m4.3.3.2.2.2.1.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2">subscript</csymbol><ci id="S2.SS2.p1.4.m4.3.3.2.2.2.2.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2.2">𝑠</ci><apply id="S2.SS2.p1.4.m4.3.3.2.2.2.3.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3"><minus id="S2.SS2.p1.4.m4.3.3.2.2.2.3.1.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.1"></minus><ci id="S2.SS2.p1.4.m4.3.3.2.2.2.3.2.cmml" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.2">𝑡</ci><cn id="S2.SS2.p1.4.m4.3.3.2.2.2.3.3.cmml" type="integer" xref="S2.SS2.p1.4.m4.3.3.2.2.2.3.3">1</cn></apply></apply><ci id="S2.SS2.p1.4.m4.1.1.cmml" xref="S2.SS2.p1.4.m4.1.1">…</ci><apply id="S2.SS2.p1.4.m4.4.4.3.3.3.cmml" xref="S2.SS2.p1.4.m4.4.4.3.3.3"><csymbol cd="ambiguous" id="S2.SS2.p1.4.m4.4.4.3.3.3.1.cmml" xref="S2.SS2.p1.4.m4.4.4.3.3.3">subscript</csymbol><ci 
id="S2.SS2.p1.4.m4.4.4.3.3.3.2.cmml" xref="S2.SS2.p1.4.m4.4.4.3.3.3.2">𝑠</ci><cn id="S2.SS2.p1.4.m4.4.4.3.3.3.3.cmml" type="integer" xref="S2.SS2.p1.4.m4.4.4.3.3.3.3">0</cn></apply></set></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.SS2.p1.4.m4.4c">S_{t}=\{s_{t},s_{t-1},...,s_{0}\}</annotation><annotation encoding="application/x-llamapun" id="S2.SS2.p1.4.m4.4d">italic_S start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT = { italic_s start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT , italic_s start_POSTSUBSCRIPT italic_t - 1 end_POSTSUBSCRIPT , … , italic_s start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT }</annotation></semantics></math> as its input <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib9" title="">9</a>]</cite>. In this paper, we outline theoretically how we can understand these predictions and reserve analysis and the accompanying model development for future work. We remain agnostic towards the underlying model but note how it is essentially a sequence-to-sequence prediction problem.</p> </div> <div class="ltx_para" id="S2.SS2.p2"> <p class="ltx_p" id="S2.SS2.p2.1">First, we can ask how predictable a story is via the mutual information between the history and a future, realised state. 
High mutual information means a show is predictable - that is, if we know the history of the show, we can make better predictions about its future:</p> </div> <div class="ltx_para" id="S2.SS2.p3"> <table class="ltx_equation ltx_eqn_table" id="S2.E3"> <tbody><tr class="ltx_equation ltx_eqn_row ltx_align_baseline"> <td class="ltx_eqn_cell ltx_eqn_center_padleft"></td> <td class="ltx_eqn_cell ltx_align_center"><math alttext="\textbf{Predictability}=I(s_{t+1};S_{t})" class="ltx_Math" display="block" id="S2.E3.m1.2"><semantics id="S2.E3.m1.2a"><mrow id="S2.E3.m1.2.2" xref="S2.E3.m1.2.2.cmml"><mtext class="ltx_mathvariant_bold" id="S2.E3.m1.2.2.4" xref="S2.E3.m1.2.2.4a.cmml">Predictability</mtext><mo id="S2.E3.m1.2.2.3" xref="S2.E3.m1.2.2.3.cmml">=</mo><mrow id="S2.E3.m1.2.2.2" xref="S2.E3.m1.2.2.2.cmml"><mi id="S2.E3.m1.2.2.2.4" xref="S2.E3.m1.2.2.2.4.cmml">I</mi><mo id="S2.E3.m1.2.2.2.3" xref="S2.E3.m1.2.2.2.3.cmml">⁢</mo><mrow id="S2.E3.m1.2.2.2.2.2" xref="S2.E3.m1.2.2.2.2.3.cmml"><mo id="S2.E3.m1.2.2.2.2.2.3" stretchy="false" xref="S2.E3.m1.2.2.2.2.3.cmml">(</mo><msub id="S2.E3.m1.1.1.1.1.1.1" xref="S2.E3.m1.1.1.1.1.1.1.cmml"><mi id="S2.E3.m1.1.1.1.1.1.1.2" xref="S2.E3.m1.1.1.1.1.1.1.2.cmml">s</mi><mrow id="S2.E3.m1.1.1.1.1.1.1.3" xref="S2.E3.m1.1.1.1.1.1.1.3.cmml"><mi id="S2.E3.m1.1.1.1.1.1.1.3.2" xref="S2.E3.m1.1.1.1.1.1.1.3.2.cmml">t</mi><mo id="S2.E3.m1.1.1.1.1.1.1.3.1" xref="S2.E3.m1.1.1.1.1.1.1.3.1.cmml">+</mo><mn id="S2.E3.m1.1.1.1.1.1.1.3.3" xref="S2.E3.m1.1.1.1.1.1.1.3.3.cmml">1</mn></mrow></msub><mo id="S2.E3.m1.2.2.2.2.2.4" xref="S2.E3.m1.2.2.2.2.3.cmml">;</mo><msub id="S2.E3.m1.2.2.2.2.2.2" xref="S2.E3.m1.2.2.2.2.2.2.cmml"><mi id="S2.E3.m1.2.2.2.2.2.2.2" xref="S2.E3.m1.2.2.2.2.2.2.2.cmml">S</mi><mi id="S2.E3.m1.2.2.2.2.2.2.3" xref="S2.E3.m1.2.2.2.2.2.2.3.cmml">t</mi></msub><mo id="S2.E3.m1.2.2.2.2.2.5" stretchy="false" xref="S2.E3.m1.2.2.2.2.3.cmml">)</mo></mrow></mrow></mrow><annotation-xml encoding="MathML-Content" id="S2.E3.m1.2b"><apply 
id="S2.E3.m1.2.2.cmml" xref="S2.E3.m1.2.2"><eq id="S2.E3.m1.2.2.3.cmml" xref="S2.E3.m1.2.2.3"></eq><ci id="S2.E3.m1.2.2.4a.cmml" xref="S2.E3.m1.2.2.4"><mtext class="ltx_mathvariant_bold" id="S2.E3.m1.2.2.4.cmml" xref="S2.E3.m1.2.2.4">Predictability</mtext></ci><apply id="S2.E3.m1.2.2.2.cmml" xref="S2.E3.m1.2.2.2"><times id="S2.E3.m1.2.2.2.3.cmml" xref="S2.E3.m1.2.2.2.3"></times><ci id="S2.E3.m1.2.2.2.4.cmml" xref="S2.E3.m1.2.2.2.4">𝐼</ci><list id="S2.E3.m1.2.2.2.2.3.cmml" xref="S2.E3.m1.2.2.2.2.2"><apply id="S2.E3.m1.1.1.1.1.1.1.cmml" xref="S2.E3.m1.1.1.1.1.1.1"><csymbol cd="ambiguous" id="S2.E3.m1.1.1.1.1.1.1.1.cmml" xref="S2.E3.m1.1.1.1.1.1.1">subscript</csymbol><ci id="S2.E3.m1.1.1.1.1.1.1.2.cmml" xref="S2.E3.m1.1.1.1.1.1.1.2">𝑠</ci><apply id="S2.E3.m1.1.1.1.1.1.1.3.cmml" xref="S2.E3.m1.1.1.1.1.1.1.3"><plus id="S2.E3.m1.1.1.1.1.1.1.3.1.cmml" xref="S2.E3.m1.1.1.1.1.1.1.3.1"></plus><ci id="S2.E3.m1.1.1.1.1.1.1.3.2.cmml" xref="S2.E3.m1.1.1.1.1.1.1.3.2">𝑡</ci><cn id="S2.E3.m1.1.1.1.1.1.1.3.3.cmml" type="integer" xref="S2.E3.m1.1.1.1.1.1.1.3.3">1</cn></apply></apply><apply id="S2.E3.m1.2.2.2.2.2.2.cmml" xref="S2.E3.m1.2.2.2.2.2.2"><csymbol cd="ambiguous" id="S2.E3.m1.2.2.2.2.2.2.1.cmml" xref="S2.E3.m1.2.2.2.2.2.2">subscript</csymbol><ci id="S2.E3.m1.2.2.2.2.2.2.2.cmml" xref="S2.E3.m1.2.2.2.2.2.2.2">𝑆</ci><ci id="S2.E3.m1.2.2.2.2.2.2.3.cmml" xref="S2.E3.m1.2.2.2.2.2.2.3">𝑡</ci></apply></list></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.E3.m1.2c">\textbf{Predictability}=I(s_{t+1};S_{t})</annotation><annotation encoding="application/x-llamapun" id="S2.E3.m1.2d">Predictability = italic_I ( italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT ; italic_S start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT )</annotation></semantics></math></td> <td class="ltx_eqn_cell ltx_eqn_center_padright"></td> <td class="ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_right" rowspan="1"><span class="ltx_tag ltx_tag_equation 
ltx_align_right">(3)</span></td> </tr></tbody> </table> </div> <div class="ltx_para" id="S2.SS2.p4"> <p class="ltx_p" id="S2.SS2.p4.1">A key question for an audience is how much it remains in the dark about what will happen next. We can capture this uncertainty in information-theoretic terms by computing the entropy over the predicted distribution:</p> </div> <div class="ltx_para" id="S2.SS2.p5"> <table class="ltx_equation ltx_eqn_table" id="S2.E4"> <tbody><tr class="ltx_equation ltx_eqn_row ltx_align_baseline"> <td class="ltx_eqn_cell ltx_eqn_center_padleft"></td> <td class="ltx_eqn_cell ltx_align_center"><math alttext="\textbf{Suspense}=\text{H}(P(s_{t+1}|S_{t}))" class="ltx_Math" display="block" id="S2.E4.m1.1"><semantics id="S2.E4.m1.1a"><mrow id="S2.E4.m1.1.1" xref="S2.E4.m1.1.1.cmml"><mtext class="ltx_mathvariant_bold" id="S2.E4.m1.1.1.3" xref="S2.E4.m1.1.1.3a.cmml">Suspense</mtext><mo id="S2.E4.m1.1.1.2" xref="S2.E4.m1.1.1.2.cmml">=</mo><mrow id="S2.E4.m1.1.1.1" xref="S2.E4.m1.1.1.1.cmml"><mtext id="S2.E4.m1.1.1.1.3" xref="S2.E4.m1.1.1.1.3a.cmml">H</mtext><mo id="S2.E4.m1.1.1.1.2" xref="S2.E4.m1.1.1.1.2.cmml">⁢</mo><mrow id="S2.E4.m1.1.1.1.1.1" xref="S2.E4.m1.1.1.1.1.1.1.cmml"><mo id="S2.E4.m1.1.1.1.1.1.2" stretchy="false" xref="S2.E4.m1.1.1.1.1.1.1.cmml">(</mo><mrow id="S2.E4.m1.1.1.1.1.1.1" xref="S2.E4.m1.1.1.1.1.1.1.cmml"><mi id="S2.E4.m1.1.1.1.1.1.1.3" xref="S2.E4.m1.1.1.1.1.1.1.3.cmml">P</mi><mo id="S2.E4.m1.1.1.1.1.1.1.2" xref="S2.E4.m1.1.1.1.1.1.1.2.cmml">⁢</mo><mrow id="S2.E4.m1.1.1.1.1.1.1.1.1" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.cmml"><mo id="S2.E4.m1.1.1.1.1.1.1.1.1.2" stretchy="false" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.cmml">(</mo><mrow id="S2.E4.m1.1.1.1.1.1.1.1.1.1" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.cmml"><msub id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.cmml"><mi id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.2" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.2.cmml">s</mi><mrow id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3" 
xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.cmml"><mi id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.2" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.2.cmml">t</mi><mo id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.1" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.1.cmml">+</mo><mn id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.3" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.3.cmml">1</mn></mrow></msub><mo fence="false" id="S2.E4.m1.1.1.1.1.1.1.1.1.1.1" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.1.cmml">|</mo><msub id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.cmml"><mi id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.2" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.2.cmml">S</mi><mi id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.3" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.3.cmml">t</mi></msub></mrow><mo id="S2.E4.m1.1.1.1.1.1.1.1.1.3" stretchy="false" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.cmml">)</mo></mrow></mrow><mo id="S2.E4.m1.1.1.1.1.1.3" stretchy="false" xref="S2.E4.m1.1.1.1.1.1.1.cmml">)</mo></mrow></mrow></mrow><annotation-xml encoding="MathML-Content" id="S2.E4.m1.1b"><apply id="S2.E4.m1.1.1.cmml" xref="S2.E4.m1.1.1"><eq id="S2.E4.m1.1.1.2.cmml" xref="S2.E4.m1.1.1.2"></eq><ci id="S2.E4.m1.1.1.3a.cmml" xref="S2.E4.m1.1.1.3"><mtext class="ltx_mathvariant_bold" id="S2.E4.m1.1.1.3.cmml" xref="S2.E4.m1.1.1.3">Suspense</mtext></ci><apply id="S2.E4.m1.1.1.1.cmml" xref="S2.E4.m1.1.1.1"><times id="S2.E4.m1.1.1.1.2.cmml" xref="S2.E4.m1.1.1.1.2"></times><ci id="S2.E4.m1.1.1.1.3a.cmml" xref="S2.E4.m1.1.1.1.3"><mtext id="S2.E4.m1.1.1.1.3.cmml" xref="S2.E4.m1.1.1.1.3">H</mtext></ci><apply id="S2.E4.m1.1.1.1.1.1.1.cmml" xref="S2.E4.m1.1.1.1.1.1"><times id="S2.E4.m1.1.1.1.1.1.1.2.cmml" xref="S2.E4.m1.1.1.1.1.1.1.2"></times><ci id="S2.E4.m1.1.1.1.1.1.1.3.cmml" xref="S2.E4.m1.1.1.1.1.1.1.3">𝑃</ci><apply id="S2.E4.m1.1.1.1.1.1.1.1.1.1.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1"><csymbol cd="latexml" id="S2.E4.m1.1.1.1.1.1.1.1.1.1.1.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.1">conditional</csymbol><apply id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2"><csymbol 
cd="ambiguous" id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.1.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2">subscript</csymbol><ci id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.2.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.2">𝑠</ci><apply id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3"><plus id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.1.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.1"></plus><ci id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.2.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.2">𝑡</ci><cn id="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.3.cmml" type="integer" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.2.3.3">1</cn></apply></apply><apply id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3"><csymbol cd="ambiguous" id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.1.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3">subscript</csymbol><ci id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.2.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.2">𝑆</ci><ci id="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.3.cmml" xref="S2.E4.m1.1.1.1.1.1.1.1.1.1.3.3">𝑡</ci></apply></apply></apply></apply></apply></annotation-xml><annotation encoding="application/x-tex" id="S2.E4.m1.1c">\textbf{Suspense}=\text{H}(P(s_{t+1}|S_{t}))</annotation><annotation encoding="application/x-llamapun" id="S2.E4.m1.1d">Suspense = H ( italic_P ( italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT | italic_S start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) )</annotation></semantics></math></td> <td class="ltx_eqn_cell ltx_eqn_center_padright"></td> <td class="ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_right" rowspan="1"><span class="ltx_tag ltx_tag_equation ltx_align_right">(4)</span></td> </tr></tbody> </table> </div> <div class="ltx_para" id="S2.SS2.p6"> <p class="ltx_p" id="S2.SS2.p6.1">In simple terms, this entropy measure captures how confidently the audience can predict what will happen next. If it is low, a film viewer would (feel like they) clearly know what will happen in the next scene. In turn, if the entropy is high, we are left wondering what will happen next. 
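As a minimal sketch, the suspense measure of Eq. (4) is simply the Shannon entropy of the predicted next-state distribution. The distribution below is a hypothetical stand-in for the output of a trained sequence model over the seven emotion states used in our pipeline; the model itself is left unspecified here:

```python
import numpy as np
from scipy.stats import entropy

# Hypothetical predicted distribution P(s_{t+1} | S_t) over the seven
# deepface emotion states (angry, disgust, fear, happy, sad, surprise,
# neutral); in practice this would come from a trained sequence model.
p_next = np.array([0.05, 0.02, 0.08, 0.55, 0.10, 0.05, 0.15])

# Suspense = H(P(s_{t+1} | S_t)); base 2 gives bits (the base is a free choice).
suspense = entropy(p_next, base=2)

# The measure is maximal for a uniform, maximally uncertain prediction:
max_suspense = entropy(np.ones(7) / 7, base=2)  # log2(7) bits
```

A confident, peaked prediction yields low suspense; a cliffhanger that leaves many continuations open pushes the entropy towards its maximum of log2(7) bits.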
Shows with cliffhangers would see a spike in this entropy towards the end of an episode whereas closed endings should have this entropy measure decline in the final chapters.</p> </div> <div class="ltx_para" id="S2.SS2.p7"> <p class="ltx_p" id="S2.SS2.p7.1">When predictions meet reality, we can again quantify an audience’s reaction by computing the JSD between the prediction and the revealed reality of the story:</p> </div> <div class="ltx_para" id="S2.SS2.p8"> <table class="ltx_equation ltx_eqn_table" id="S2.E5"> <tbody><tr class="ltx_equation ltx_eqn_row ltx_align_baseline"> <td class="ltx_eqn_cell ltx_eqn_center_padleft"></td> <td class="ltx_eqn_cell ltx_align_center"><math alttext="\textbf{Plot twist}=\text{JSD}(P(s_{t+1})\mid\mid s_{t+1})" class="ltx_math_unparsed" display="block" id="S2.E5.m1.1"><semantics id="S2.E5.m1.1a"><mrow id="S2.E5.m1.1b"><mtext class="ltx_mathvariant_bold" id="S2.E5.m1.1.1">Plot twist</mtext><mo id="S2.E5.m1.1.2">=</mo><mtext id="S2.E5.m1.1.3">JSD</mtext><mrow id="S2.E5.m1.1.4"><mo id="S2.E5.m1.1.4.1" stretchy="false">(</mo><mi id="S2.E5.m1.1.4.2">P</mi><mrow id="S2.E5.m1.1.4.3"><mo id="S2.E5.m1.1.4.3.1" stretchy="false">(</mo><msub id="S2.E5.m1.1.4.3.2"><mi id="S2.E5.m1.1.4.3.2.2">s</mi><mrow id="S2.E5.m1.1.4.3.2.3"><mi id="S2.E5.m1.1.4.3.2.3.2">t</mi><mo id="S2.E5.m1.1.4.3.2.3.1">+</mo><mn id="S2.E5.m1.1.4.3.2.3.3">1</mn></mrow></msub><mo id="S2.E5.m1.1.4.3.3" stretchy="false">)</mo></mrow><mo id="S2.E5.m1.1.4.4" lspace="0em" rspace="0.0835em">∣</mo><mo id="S2.E5.m1.1.4.5" lspace="0.0835em" rspace="0.167em">∣</mo><msub id="S2.E5.m1.1.4.6"><mi id="S2.E5.m1.1.4.6.2">s</mi><mrow id="S2.E5.m1.1.4.6.3"><mi id="S2.E5.m1.1.4.6.3.2">t</mi><mo id="S2.E5.m1.1.4.6.3.1">+</mo><mn id="S2.E5.m1.1.4.6.3.3">1</mn></mrow></msub><mo id="S2.E5.m1.1.4.7" stretchy="false">)</mo></mrow></mrow><annotation encoding="application/x-tex" id="S2.E5.m1.1c">\textbf{Plot twist}=\text{JSD}(P(s_{t+1})\mid\mid s_{t+1})</annotation><annotation 
encoding="application/x-llamapun" id="S2.E5.m1.1d">Plot twist = JSD ( italic_P ( italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT ) ∣ ∣ italic_s start_POSTSUBSCRIPT italic_t + 1 end_POSTSUBSCRIPT )</annotation></semantics></math></td> <td class="ltx_eqn_cell ltx_eqn_center_padright"></td> <td class="ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_right" rowspan="1"><span class="ltx_tag ltx_tag_equation ltx_align_right">(5)</span></td> </tr></tbody> </table> </div> <div class="ltx_para" id="S2.SS2.p9"> <p class="ltx_p" id="S2.SS2.p9.1">This measure can capture key intuitions about a story: A high divergence will signify an unexpected plot twist. Low divergence will mean that a story treads along as expected. We note that these prediction-based measures are orthogonal to the measures introduced at the outset. Take, for example, our two JSD measures: On the one hand, changes in the state might be entirely predictable. On the other, the absence of a change might be surprising if it violates predictions. For example, imagine a character receiving good news but staying sad.</p> </div> </section> </section> <section class="ltx_section" id="S3"> <h2 class="ltx_title ltx_title_section"> <span class="ltx_tag ltx_tag_section">3 </span>Discussion</h2> <div class="ltx_para" id="S3.p1"> <p class="ltx_p" id="S3.p1.1">Our work introduces an information-theoretic framework to capture narratives. We showed how core principles from information theory provide a formal language for understanding story dynamics, and applied them to a real-world data set.
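As a concrete sketch of the plot-twist measure (Eq. 5), assume the model's prediction is compared against the realised state encoded as a near-one-hot distribution; the numbers below are hypothetical. Note that SciPy's `jensenshannon` returns the Jensen-Shannon distance (the square root of the divergence), so it is squared here:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical predicted distribution over the seven emotion states
# and the realised next state, encoded as a near-one-hot distribution.
p_pred = np.array([0.05, 0.02, 0.08, 0.55, 0.10, 0.05, 0.15])  # model expects "happy"
s_real = np.array([0.02, 0.02, 0.80, 0.02, 0.10, 0.02, 0.02])  # reality: "fear"

# jensenshannon returns the JS distance; squaring recovers the divergence.
# With base=2 the divergence is bounded in [0, 1].
plot_twist = jensenshannon(p_pred, s_real, base=2) ** 2

# An outcome matching the prediction yields zero divergence:
no_twist = jensenshannon(p_pred, p_pred, base=2) ** 2
```

High values flag scenes that overturn the audience's expectations; values near zero indicate the story treads along as predicted.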
Beyond these measures operating on the state of a narrative, we formally introduced metrics for predicted future courses of a storyline.</p> </div> <div class="ltx_para" id="S3.p2"> <p class="ltx_p" id="S3.p2.1">Our framework provides a foundation for several key areas of interest at the intersection of creativity and AI: For example, our measures offer quantitative metrics for assessing the complexity, unpredictability, and plot twists in stories. The genre-specific patterns observed in TV shows could serve as baselines for evaluating AI-generated stories in different styles. They may also help distinguish between human and AI-generated narratives, potentially revealing characteristic patterns in AI stories. Our metrics may also assist in identifying systemic biases in narratives across genres or cultures – ensuring AI-generated content reflects diverse storytelling traditions. Finally, our framework might assist human and AI co-creators, suggesting plot developments or highlighting areas that may need more tension or surprise to maintain audience engagement.</p> </div> <div class="ltx_para" id="S3.p3"> <p class="ltx_p" id="S3.p3.1">For more classical ML applications, our metrics can serve as metadata for viewership analyses and recommendations <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib19" title="">19</a>]</cite>, e.g. differentiating people’s interest in more or less (emotionally) complex movies. One may also investigate how likely people are to watch the next episode of a TV series based on the quantified strength of a cliffhanger. Our metrics could also help both human and AI editors identify crucial and relevant moments, accelerating summary or trailer generation <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib20" title="">20</a>]</cite>. For example, in summarisation, practitioners highlight pivotal moments.
In turn, trailers or previews might want to avoid revealing plot twists.</p> </div> <div class="ltx_para" id="S3.p4"> <p class="ltx_p" id="S3.p4.1">Future work should focus on these applications and extend the framework to other modalities like books <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib7" title="">7</a>, <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib21" title="">21</a>]</cite> or even music <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib22" title="">22</a>]</cite>. For prediction-focused measures, applying (generative) models to capture human intuitions about stories will be a key challenge. An LLM’s next token distribution might be a starting point. Creative professionals’ attitudes towards such quantitative narrative analysis tools should guide further development. In conclusion, we offer a bridge between the creative intuitions of human storytellers and the analytical capabilities of ML, contributing to the ongoing dialogue about how AI can augment and enhance human creativity rather than replace it.</p> </div> </section> <section class="ltx_section" id="Sx1"> <h2 class="ltx_title ltx_title_section">Acknowledgments and Disclosure of Funding</h2> <div class="ltx_para" id="Sx1.p1"> <p class="ltx_p" id="Sx1.p1.1">We would like to thank Franziska Brändle, Peter Dayan, Dante Di Loreto, Tom Hoffman, Kit Thwaite and the Penguin Random House UK data science team for comments and discussions. We are particularly grateful to the RTL NL data science team for support and discussions, especially Mateo Gutierrez Granda, Iskaj Janssen, Prajakta Shouche and Ivan Yovchev. The authors are employees of Bertelsmann SE &amp; Co. 
KGaA (LS) and RTL Nederland BV (MP, DO).</p> </div> </section> <section class="ltx_bibliography" id="bib"> <h2 class="ltx_title ltx_title_bibliography">References</h2> <ul class="ltx_biblist"> <li class="ltx_bibitem" id="bib.bib1"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[1]</span> <span class="ltx_bibblock"> Y. Sun, Z. Li, K. Fang, C. H. Lee, and A. Asadipour, “Language as Reality: A Co-Creative Storytelling Game Experience in 1001 Nights Using Generative AI,” <span class="ltx_text ltx_font_italic" id="bib.bib1.1.1">Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</span>, vol. 19, pp. 425–434, Oct. 2023. </span> <span class="ltx_bibblock">Number: 1. </span> </li> <li class="ltx_bibitem" id="bib.bib2"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[2]</span> <span class="ltx_bibblock"> E. Chu, J. Dunn, D. Roy, G. Sands, and R. Stevens, “AI in storytelling: Machines as cocreators,” <span class="ltx_text ltx_font_italic" id="bib.bib2.1.1">McKinsey &amp; Company Media &amp; Entertainment</span>, 2017. </span> </li> <li class="ltx_bibitem" id="bib.bib3"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[3]</span> <span class="ltx_bibblock"> A. Piper, R. J. So, and D. Bamman, “Narrative Theory for Computational Narrative Understanding,” in <span class="ltx_text ltx_font_italic" id="bib.bib3.1.1">Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</span>, (Online and Punta Cana, Dominican Republic), pp. 298–311, Association for Computational Linguistics, 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib4"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[4]</span> <span class="ltx_bibblock"> M. Del Vecchio, A. Kharlamov, G. Parry, and G. 
Pogrebna, “The Data Science of Hollywood: Using Emotional Arcs of Movies to Drive Business Model Innovation in Entertainment Industries,” <span class="ltx_text ltx_font_italic" id="bib.bib4.1.1">Journal of the Operational Research Society</span>, vol. 72, pp. 1110–1137, May 2021. </span> <span class="ltx_bibblock">arXiv:1807.02221 [cs]. </span> </li> <li class="ltx_bibitem" id="bib.bib5"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[5]</span> <span class="ltx_bibblock"> K. Vishnubhotla, A. Hammond, G. Hirst, and S. M. Mohammad, “The Emotion Dynamics of Literary Novels,” Mar. 2024. </span> <span class="ltx_bibblock">arXiv:2403.02474 [cs]. </span> </li> <li class="ltx_bibitem" id="bib.bib6"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[6]</span> <span class="ltx_bibblock"> M. Sap, A. Jafarpour, Y. Choi, N. A. Smith, J. W. Pennebaker, and E. Horvitz, “Quantifying the narrative flow of imagined versus autobiographical stories,” <span class="ltx_text ltx_font_italic" id="bib.bib6.1.1">Proceedings of the National Academy of Sciences</span>, vol. 119, p. e2211715119, Nov. 2022. </span> <span class="ltx_bibblock">Publisher: Proceedings of the National Academy of Sciences. </span> </li> <li class="ltx_bibitem" id="bib.bib7"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[7]</span> <span class="ltx_bibblock"> A. Piper, H. Xu, and E. D. Kolaczyk, “Modeling Narrative Revelation,” in <span class="ltx_text ltx_font_italic" id="bib.bib7.1.1">HR 2023: Computational Humanities Research Conference</span>, (Paris, France), 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib8"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[8]</span> <span class="ltx_bibblock"> J. Murdock, C. Allen, and S. DeDeo, “Exploration and exploitation of Victorian science in Darwin’s reading notebooks,” <span class="ltx_text ltx_font_italic" id="bib.bib8.1.1">Cognition</span>, vol. 159, pp. 117–126, Feb. 2017. 
</span> </li> <li class="ltx_bibitem" id="bib.bib9"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[9]</span> <span class="ltx_bibblock"> A. T. J. Barron, J. Huang, R. L. Spang, and S. DeDeo, “Individuals, institutions, and innovation in the debates of the French Revolution,” <span class="ltx_text ltx_font_italic" id="bib.bib9.1.1">Proceedings of the National Academy of Sciences</span>, vol. 115, pp. 4607–4612, May 2018. </span> <span class="ltx_bibblock">Publisher: Proceedings of the National Academy of Sciences. </span> </li> <li class="ltx_bibitem" id="bib.bib10"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[10]</span> <span class="ltx_bibblock"> A. Pizzo, V. Lombardo, and R. Damiano, <span class="ltx_text ltx_font_italic" id="bib.bib10.1.1">Interactive Storytelling: A Cross-Media Approach to Writing, Producing and Editing with AI</span>. </span> <span class="ltx_bibblock">Taylor &amp; Francis, Sept. 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib11"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[11]</span> <span class="ltx_bibblock"> D. Teodorescu and S. Mohammad, “Evaluating Emotion Arcs Across Languages: Bridging the Global Divide in Sentiment Analysis,” in <span class="ltx_text ltx_font_italic" id="bib.bib11.1.1">Findings of the Association for Computational Linguistics: EMNLP 2023</span>, (Singapore), pp. 4124–4137, Association for Computational Linguistics, 2023. </span> </li> <li class="ltx_bibitem" id="bib.bib12"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[12]</span> <span class="ltx_bibblock"> H. Agarwal, K. Bansal, A. Joshi, and A. Modi, “Shapes of Emotions: Multimodal Emotion Recognition in Conversations via Emotion Shifts,” Nov. 2022. </span> <span class="ltx_bibblock">arXiv:2112.01938 [cs]. </span> </li> <li class="ltx_bibitem" id="bib.bib13"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[13]</span> <span class="ltx_bibblock"> E. Kim and R. 
Klinger, “Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning to Classify Emotional Relationships of Fictional Characters,” Apr. 2019. </span> <span class="ltx_bibblock">arXiv:1903.12453 [cs]. </span> </li> <li class="ltx_bibitem" id="bib.bib14"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[14]</span> <span class="ltx_bibblock"> W. E. Hipson and S. M. Mohammad, “Emotion dynamics in movie dialogues,” <span class="ltx_text ltx_font_italic" id="bib.bib14.1.1">PLOS ONE</span>, vol. 16, p. e0256153, Sept. 2021. </span> <span class="ltx_bibblock">Publisher: Public Library of Science. </span> </li> <li class="ltx_bibitem" id="bib.bib15"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[15]</span> <span class="ltx_bibblock"> T. Crijns, M. Doyran, O. S. Kayhan, R. Klein, V. Koops, C. Laugs, D. Odijk, A. A. Salah, A. Serebrenik, Y. Tımar, and A. Volk, “Multimodal Emotion Recognition for Visualizing Storyline in a TV Series,” 2020. </span> </li> <li class="ltx_bibitem" id="bib.bib16"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[16]</span> <span class="ltx_bibblock"> K. Elkins, “The Shapes of Stories: Sentiment Analysis for Narrative,” <span class="ltx_text ltx_font_italic" id="bib.bib16.1.1">Elements in Digital Literary Studies</span>, July 2022. </span> <span class="ltx_bibblock">ISBN: 9781009270403 9781009270397 Publisher: Cambridge University Press. </span> </li> <li class="ltx_bibitem" id="bib.bib17"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[17]</span> <span class="ltx_bibblock"> S. S. Nath, F. Brändle, E. Schulz, P. Dayan, and A. A. Brielmann, “Relating objective complexity, subjective complexity and beauty,” 2023. </span> <span class="ltx_bibblock">Publisher: PsyArXiv. </span> </li> <li class="ltx_bibitem" id="bib.bib18"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[18]</span> <span class="ltx_bibblock"> S. Vrijenhoek, G. Bénédict, M. Gutierrez Granada, and D. 
Odijk, “RADio* – An Introduction to Measuring Normative Diversity in News Recommendations,” <span class="ltx_text ltx_font_italic" id="bib.bib18.1.1">ACM Trans. Recomm. Syst.</span>, vol. 3, pp. 5:1–5:29, Aug. 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib19"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[19]</span> <span class="ltx_bibblock"> Q. Zhang, W. Wang, and Y. Chen, “Frontiers: In-Consumption Social Listening with Moment-to-Moment Unstructured Data: The Case of Movie Appreciation and Live Comments,” <span class="ltx_text ltx_font_italic" id="bib.bib19.1.1">Marketing Science</span>, vol. 39, pp. 285–295, Mar. 2020. </span> <span class="ltx_bibblock">Publisher: INFORMS. </span> </li> <li class="ltx_bibitem" id="bib.bib20"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[20]</span> <span class="ltx_bibblock"> C. Bretti, P. Mettes, H. V. Koops, D. Odijk, and N. van Noord, “Find the Cliffhanger: Multi-modal Trailerness in Soap Operas,” in <span class="ltx_text ltx_font_italic" id="bib.bib20.1.1">MultiMedia Modeling</span> (S. Rudinac, A. Hanjalic, C. Liem, M. Worring, B. P. Jónsson, B. Liu, and Y. Yamakata, eds.), (Cham), pp. 199–212, Springer Nature Switzerland, 2024. </span> </li> <li class="ltx_bibitem" id="bib.bib21"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[21]</span> <span class="ltx_bibblock"> O. Toubia, J. Berger, and J. Eliashberg, “How quantifying the shape of stories predicts their success,” <span class="ltx_text ltx_font_italic" id="bib.bib21.1.1">Proceedings of the National Academy of Sciences</span>, vol. 118, p. e2011695118, June 2021. </span> </li> <li class="ltx_bibitem" id="bib.bib22"> <span class="ltx_tag ltx_role_refnum ltx_tag_bibitem">[22]</span> <span class="ltx_bibblock"> J. E. Cohen, “Information theory and music,” <span class="ltx_text ltx_font_italic" id="bib.bib22.1.1">Behavioral Science</span>, vol. 7, no. 2, pp. 137–163, 1962. 
</span> <span class="ltx_bibblock">_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/bs.3830070202. </span> </li> </ul> </section> <section class="ltx_appendix" id="A1"> <h2 class="ltx_title ltx_title_appendix"> <span class="ltx_tag ltx_tag_appendix">Appendix A </span>Appendix / supplemental material</h2> <section class="ltx_subsection" id="A1.SS1"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">A.1 </span>Dataset</h3> <div class="ltx_para" id="A1.SS1.p1"> <p class="ltx_p" id="A1.SS1.p1.1">For the analysis that is illustrated in Figure <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>, we had access to an internal dataset from a large European network. This dataset encompassed over 3000 minutes of video and represented a broad overview of the landscape of the most popular TV formats. It contained:</p> </div> <div class="ltx_para" id="A1.SS1.p2"> <ul class="ltx_itemize" id="A1.I1"> <li class="ltx_item" id="A1.I1.i1" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i1.p1"> <p class="ltx_p" id="A1.I1.i1.p1.1">16 episodes of a long-running daily soap focusing on the intertwined lives of residents of a small community (abbreviated as "Soap" in our plots).</p> </div> </li> <li class="ltx_item" id="A1.I1.i2" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i2.p1"> <p class="ltx_p" id="A1.I1.i2.p1.1">12 episodes of a dating format where couples test the strength of their relationships by living with attractive singles on a tropical island ("Dating 2").</p> </div> </li> <li class="ltx_item" id="A1.I1.i3" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i3.p1"> <p class="ltx_p" id="A1.I1.i3.p1.1">11 episodes of a survival competition
format where contestants are stranded on a remote island and must work together to overcome challenges ("Competition").</p> </div> </li> <li class="ltx_item" id="A1.I1.i4" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i4.p1"> <p class="ltx_p" id="A1.I1.i4.p1.1">8 episodes of a reality series where single individuals run a bed and breakfast while searching for love ("Dating 1").</p> </div> </li> <li class="ltx_item" id="A1.I1.i5" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i5.p1"> <p class="ltx_p" id="A1.I1.i5.p1.1">8 episodes of a gritty crime drama exploring the lives and conflicts within a criminal underworld ("Crime").</p> </div> </li> <li class="ltx_item" id="A1.I1.i6" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i6.p1"> <p class="ltx_p" id="A1.I1.i6.p1.1">6 episodes of a thriller series that delves into the secretive world of espionage and undercover operations ("Police").</p> </div> </li> <li class="ltx_item" id="A1.I1.i7" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i7.p1"> <p class="ltx_p" id="A1.I1.i7.p1.1">5 episodes of a crime drama based on true events, focusing on the complexities of family betrayal within a notorious crime family ("Drama 1").</p> </div> </li> <li class="ltx_item" id="A1.I1.i8" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i8.p1"> <p class="ltx_p" id="A1.I1.i8.p1.1">4 episodes of a biographical drama centered on the life and challenges of a prominent public figure ("Drama 2").</p> </div> </li> <li class="ltx_item" id="A1.I1.i9" style="list-style-type:none;"> <span class="ltx_tag ltx_tag_item">•</span> <div class="ltx_para" id="A1.I1.i9.p1"> <p class="ltx_p" id="A1.I1.i9.p1.1">3 episodes of a reality show following the everyday lives 
and humorous antics of a well-known family ("Reality").</p> </div> </li> </ul> </div> </section> <section class="ltx_subsection" id="A1.SS2"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">A.2 </span>Analysis pipeline</h3> <div class="ltx_para" id="A1.SS2.p1"> <p class="ltx_p" id="A1.SS2.p1.1">To analyse the individual episodes, we applied a machine learning pipeline. In short, this pipeline comprised the following steps. We first subsampled the videos, extracting one frame every 5 seconds. On these frames, we then applied face detection. On the detected faces, we applied emotion detection using the deepface library, which outputs a distribution over seven emotions ("angry", "disgust", "fear", "happy", "sad", "surprise", "neutral").</p> </div> <div class="ltx_para" id="A1.SS2.p2"> <p class="ltx_p" id="A1.SS2.p2.1">For our analysis, we then applied the following steps: we first averaged the distributions per frame (using a rolling average over 20 extracted frames that contain a face). Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>A plots these averaged values for the distributions of an episode of the "Crime" series. We plot the entropy of these averaged states in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>C. Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>D in turn shows the JSD between raw states, again averaged over 20 frames with faces. Note that we plot only the five most prominent emotions in Fig. 
<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>A and B for readability; these account for more than 95% of the emotions detected in the shows. Entropies and JSDs, in turn, were computed on all seven original emotions.</p> </div> <div class="ltx_para" id="A1.SS2.p3"> <p class="ltx_p" id="A1.SS2.p3.1">To compute entropies per show, we averaged all states across episodes to form an average emotion distribution per show (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>B), and computed the entropy based on this single distribution per show (Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>D). We used this single distribution because it gave us the best high-level overview of the emotions contained in a show. In turn, to compute average values for the pivot metric, we first averaged the JSD per episode, and then plotted the per-show means of these episode-level JSDs, with the accompanying standard error of the mean, in Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#S1.F1" title="Figure 1 ‣ 1 Introduction ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">1</span></a>F.</p> </div> <figure class="ltx_figure" id="A1.F3"><img alt="Refer to caption" class="ltx_graphics ltx_centering ltx_img_square" height="219" id="A1.F3.g1" src="extracted/5993258/Fig2_mock2.png" width="268"/> <figcaption class="ltx_caption ltx_centering"><span class="ltx_tag ltx_tag_figure">Figure 3: </span>Supplementary figure: KL divergences for an example show and trajectories (left) and summary statistics (right). 
Note the qualitative agreement with the JSD results.</figcaption> </figure> </section> <section class="ltx_subsection" id="A1.SS3"> <h3 class="ltx_title ltx_title_subsection"> <span class="ltx_tag ltx_tag_subsection">A.3 </span>Choice of divergence measure</h3> <div class="ltx_para" id="A1.SS3.p1"> <p class="ltx_p" id="A1.SS3.p1.1">We chose the JSD over the Kullback-Leibler divergence (KLD) because the JSD is symmetric and bounded, which makes it more suitable for use as a metric <cite class="ltx_cite ltx_citemacro_citep">[<a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#bib.bib18" title="">18</a>]</cite>. We observe qualitatively similar results with the KLD; see Fig. <a class="ltx_ref" href="https://arxiv.org/html/2411.12907v1#A1.F3" title="Figure 3 ‣ A.2 Analysis pipeline ‣ Appendix A Appendix / supplemental material ‣ Narrative Information Theory"><span class="ltx_text ltx_ref_tag">3</span></a>.</p> </div> </section> </section> </article> </div> <footer class="ltx_page_footer"> <div class="ltx_page_logo">Generated on Fri Nov 15 18:30:43 2024 by <a class="ltx_LaTeXML_logo" href="http://dlmf.nist.gov/LaTeXML/"><span style="letter-spacing:-0.2em; margin-right:0.1em;">L<span class="ltx_font_smallcaps" style="position:relative; bottom:2.2pt;">a</span>T<span class="ltx_font_smallcaps" style="font-size:120%;position:relative; bottom:-0.2ex;">e</span></span><span style="font-size:90%; position:relative; bottom:-0.2ex;">XML</span><img alt="Mascot Sammy" 
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAAOCAYAAAD5YeaVAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9wKExQZLWTEaOUAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAdpJREFUKM9tkL+L2nAARz9fPZNCKFapUn8kyI0e4iRHSR1Kb8ng0lJw6FYHFwv2LwhOpcWxTjeUunYqOmqd6hEoRDhtDWdA8ApRYsSUCDHNt5ul13vz4w0vWCgUnnEc975arX6ORqN3VqtVZbfbTQC4uEHANM3jSqXymFI6yWazP2KxWAXAL9zCUa1Wy2tXVxheKA9YNoR8Pt+aTqe4FVVVvz05O6MBhqUIBGk8Hn8HAOVy+T+XLJfLS4ZhTiRJgqIoVBRFIoric47jPnmeB1mW/9rr9ZpSSn3Lsmir1fJZlqWlUonKsvwWwD8ymc/nXwVBeLjf7xEKhdBut9Hr9WgmkyGEkJwsy5eHG5vN5g0AKIoCAEgkEkin0wQAfN9/cXPdheu6P33fBwB4ngcAcByHJpPJl+fn54mD3Gg0NrquXxeLRQAAwzAYj8cwTZPwPH9/sVg8PXweDAauqqr2cDjEer1GJBLBZDJBs9mE4zjwfZ85lAGg2+06hmGgXq+j3+/DsixYlgVN03a9Xu8jgCNCyIegIAgx13Vfd7vdu+FweG8YRkjXdWy329+dTgeSJD3ieZ7RNO0VAXAPwDEAO5VKndi2fWrb9jWl9Esul6PZbDY9Go1OZ7PZ9z/lyuD3OozU2wAAAABJRU5ErkJggg=="/></a> </div></footer> </div> </body> </html>
