<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"/><title>DHQ: Digital Humanities Quarterly: 2021</title><link rel="stylesheet" type="text/css" href="/dhq/common/css/dhq.css"/><link rel="stylesheet" type="text/css" media="screen" href="/dhq/common/css/dhq_screen.css"/><link rel="stylesheet" type="text/css" media="print" href="/dhq/common/css/dhq_print.css"/><link rel="alternate" type="application/atom+xml" href="/dhq/feed/news.xml"/><link rel="shortcut icon" href="/dhq/common/images/favicon.ico"/><script defer="defer" type="text/javascript" src="/dhq/common/js/javascriptLibrary.js"><!-- serialize --></script><script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-15812721-1']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 
'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script><script async="async" src="https://www.googletagmanager.com/gtag/js?id=G-F59WMFKXLW"/><script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-F59WMFKXLW'); </script><!--WTF?--><script> MathJax = { options: { skipHtmlTags: {'[-]': ['code', 'pre']} } }; </script><script id="MathJax-script" async="" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"><!--Gimme some comment!--></script><link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.7.2/styles/xcode.min.css"/><script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.7.2/highlight.min.js"><!--Gimme some comment!--></script><script src="https://code.jquery.com/jquery-3.4.0.min.js" integrity="sha256-BJeo0qm959uMBGb65z40ejJYGSgR7REI4+CW1fNKwOg=" crossorigin="anonymous"><!--Gimme some comment!--></script></head><body><div id="top"><div id="backgroundpic"><script type="text/javascript" src="/dhq/common/js/pics.js"><!--displays banner image--></script></div><div id="banner"><div id="dhqlogo"><img src="/dhq/common/images/dhqlogo.png" alt="DHQ Logo"/></div><div id="longdhqlogo"><img src="/dhq/common/images/dhqlogolonger.png" alt="Digital Humanities Quarterly Logo"/></div></div><div id="topNavigation"><div id="topnavlinks"><span><a href="/dhq/" class="topnav">home</a></span><span><a href="/dhq/submissions/index.html" class="topnav">submissions</a></span><span><a href="/dhq/about/about.html" class="topnav">about dhq</a></span><span><a href="/dhq/people/people.html" class="topnav">dhq people</a></span><span><a href="/dhq/news/news.html" class="topnav">news</a></span><span id="rightmost"><a href="/dhq/contact/contact.html" class="topnav">contact</a></span></div><div id="search"><form action="/dhq/findIt" method="get" 
onsubmit="javascript:document.location.href=cleanSearch(this.queryString.value); return false;"><div><input type="text" name="queryString" size="18"/> <input type="submit" value="Search"/></div></form></div></div></div><div id="main"><div id="leftsidebar"><div id="leftsidenav"><span>Current Issue<br/></span><ul><li><a href="/dhq/vol/18/4/index.html">2024: 18.4</a></li></ul><span>Preview Issue<br/></span><ul><li><a href="/dhq/preview/index.html">2025: 19.1</a></li></ul><span>Previous Issues<br/></span><ul><li><a href="/dhq/vol/18/3/index.html">2024: 18.3</a></li><li><a href="/dhq/vol/18/2/index.html">2024: 18.2</a></li><li><a href="/dhq/vol/18/1/index.html">2024: 18.1</a></li><li><a href="/dhq/vol/17/4/index.html">2023: 17.4</a></li><li><a href="/dhq/vol/17/3/index.html">2023: 17.3</a></li><li><a href="/dhq/vol/17/2/index.html">2023: 17.2</a></li><li><a href="/dhq/vol/17/1/index.html">2023: 17.1</a></li><li><a href="/dhq/vol/16/4/index.html">2022: 16.4</a></li><li><a href="/dhq/vol/16/3/index.html">2022: 16.3</a></li><li><a href="/dhq/vol/16/2/index.html">2022: 16.2</a></li><li><a href="/dhq/vol/16/1/index.html">2022: 16.1</a></li><li><a href="/dhq/vol/15/4/index.html">2021: 15.4</a></li><li><a href="/dhq/vol/15/3/index.html">2021: 15.3</a></li><li><a href="/dhq/vol/15/2/index.html">2021: 15.2</a></li><li><a href="/dhq/vol/15/1/index.html">2021: 15.1</a></li><li><a href="/dhq/vol/14/4/index.html">2020: 14.4</a></li><li><a href="/dhq/vol/14/3/index.html">2020: 14.3</a></li><li><a href="/dhq/vol/14/2/index.html">2020: 14.2</a></li><li><a href="/dhq/vol/14/1/index.html">2020: 14.1</a></li><li><a href="/dhq/vol/13/4/index.html">2019: 13.4</a></li><li><a href="/dhq/vol/13/3/index.html">2019: 13.3</a></li><li><a href="/dhq/vol/13/2/index.html">2019: 13.2</a></li><li><a href="/dhq/vol/13/1/index.html">2019: 13.1</a></li><li><a href="/dhq/vol/12/4/index.html">2018: 12.4</a></li><li><a href="/dhq/vol/12/3/index.html">2018: 12.3</a></li><li><a 
href="/dhq/vol/12/2/index.html">2018: 12.2</a></li><li><a href="/dhq/vol/12/1/index.html">2018: 12.1</a></li><li><a href="/dhq/vol/11/4/index.html">2017: 11.4</a></li><li><a href="/dhq/vol/11/3/index.html">2017: 11.3</a></li><li><a href="/dhq/vol/11/2/index.html">2017: 11.2</a></li><li><a href="/dhq/vol/11/1/index.html">2017: 11.1</a></li><li><a href="/dhq/vol/10/4/index.html">2016: 10.4</a></li><li><a href="/dhq/vol/10/3/index.html">2016: 10.3</a></li><li><a href="/dhq/vol/10/2/index.html">2016: 10.2</a></li><li><a href="/dhq/vol/10/1/index.html">2016: 10.1</a></li><li><a href="/dhq/vol/9/4/index.html">2015: 9.4</a></li><li><a href="/dhq/vol/9/3/index.html">2015: 9.3</a></li><li><a href="/dhq/vol/9/2/index.html">2015: 9.2</a></li><li><a href="/dhq/vol/9/1/index.html">2015: 9.1</a></li><li><a href="/dhq/vol/8/4/index.html">2014: 8.4</a></li><li><a href="/dhq/vol/8/3/index.html">2014: 8.3</a></li><li><a href="/dhq/vol/8/2/index.html">2014: 8.2</a></li><li><a href="/dhq/vol/8/1/index.html">2014: 8.1</a></li><li><a href="/dhq/vol/7/3/index.html">2013: 7.3</a></li><li><a href="/dhq/vol/7/2/index.html">2013: 7.2</a></li><li><a href="/dhq/vol/7/1/index.html">2013: 7.1</a></li><li><a href="/dhq/vol/6/3/index.html">2012: 6.3</a></li><li><a href="/dhq/vol/6/2/index.html">2012: 6.2</a></li><li><a href="/dhq/vol/6/1/index.html">2012: 6.1</a></li><li><a href="/dhq/vol/5/3/index.html">2011: 5.3</a></li><li><a href="/dhq/vol/5/2/index.html">2011: 5.2</a></li><li><a href="/dhq/vol/5/1/index.html">2011: 5.1</a></li><li><a href="/dhq/vol/4/2/index.html">2010: 4.2</a></li><li><a href="/dhq/vol/4/1/index.html">2010: 4.1</a></li><li><a href="/dhq/vol/3/4/index.html">2009: 3.4</a></li><li><a href="/dhq/vol/3/3/index.html">2009: 3.3</a></li><li><a href="/dhq/vol/3/2/index.html">2009: 3.2</a></li><li><a href="/dhq/vol/3/1/index.html">2009: 3.1</a></li><li><a href="/dhq/vol/2/1/index.html">2008: 2.1</a></li><li><a href="/dhq/vol/1/2/index.html">2007: 1.2</a></li><li><a 
href="/dhq/vol/1/1/index.html">2007: 1.1</a></li></ul><span>Indexes<br/></span><ul><li><a href="/dhq/index/title.html"> Title</a></li><li><a href="/dhq/index/author.html"> Author</a></li></ul></div><img src="/dhq/common/images/lbarrev.png" style="margin-left : 7px;" alt=""/><div id="leftsideID"><b>ISSN 1938-4122</b><br/></div><div class="leftsidecontent"><h3>Announcements</h3><ul><li><a href="/dhq/news/news.html#peer_reviews">Call for Reviewers</a></li><li><a href="/dhq/submissions/index.html#logistics">Call for Submissions</a></li></ul></div><div class="leftsidecontent"><script type="text/javascript">addthis_pub = 'dhq';</script><a href="http://www.addthis.com/bookmark.php" onmouseover="return addthis_open(this, '', '[URL]', '[TITLE]')" onmouseout="addthis_close()" onclick="return addthis_sendto()"><img src="http://s9.addthis.com/button1-addthis.gif" width="125" height="16" alt="button1-addthis.gif"/></a><script type="text/javascript" src="http://s7.addthis.com/js/152/addthis_widget.js"><!-- Javascript functions --></script></div></div><div id="mainContent"><div id="printSiteTitle">DHQ: Digital Humanities Quarterly</div><div id="toc"> <h1>2021 15.1</h1> <h2>AudioVisual Data in DH</h2> <div class="cluster"><h3>Editors: Taylor Arnold, Jasmijn van Gorp, Stefania Scagliola, and Lauren Tilton</h3></div> <div class="cluster"><h3>Front Matter</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000541/000541.html">Introduction: Special Issue on AudioVisual Data in DH</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Taylor Arnold, University of Richmond; Stefania Scagliola, Université du Luxembourg; Lauren Tilton, University of Richmond; Jasmijn Van Gorp, Utrecht University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000541en"><a title="View Abstract" class="expandCollapse monospace" 
href="javascript:expandAbstract('abstract000541en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000541en"> Our special issue explores audio and visual (AV) data as form, method, and practice in the digital humanities. Spurred by recent advances in computing alongside disciplinary expansions of what counts as evidence, audio and visual ways of knowing are enjoying a more prominent place in the field. Whether the creation, analysis, and sharing of audiovisual data or audiovisual ways of communicating scholarly knowledge, scholars are building compelling avenues of inquiry that are changing how we know, what we know, and why we know in the digital humanities (DH). These epistemological shifts not only challenge existing methodological and theoretical pathways within the field of audiovisual studies, but most importantly defy existing knowledge hierarchies within the entire field of DH. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Introduction%3A%20Special%20Issue%20on%20AudioVisual%20Data%20in%20DH&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Arnold&rft.aufirst=Taylor&rft.au=Taylor%20Arnold&rft.au=Stefania%20Scagliola&rft.au=Lauren%20Tilton&rft.au=Jasmijn%20Van Gorp"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000542/000542.html">Founding the Special Interest Group Audio-Visual in Digital Humanities: An Interview with Franciska de Jong, Martijn Kleppe, and Max Kemman </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Stefania Scagliola, Université du Luxembourg</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000542en"><a title="View 
Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000542en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000542en"> An interview with Professor Franciska de Jong (Director at CLARIN ERIC), Dr. Martijn Kleppe (Head of Research at the KB, National Library of the Netherlands), and Dr. Max Kemman (Researcher/Consultant at Dialogic) on the founding of the ADHO Audiovisual in Digital Humanities (AVinDH) Special Interest Group. They are interviewed by Stefania Scagliola (Centre for Contemporary and Digital History), who co-founded the group and is a co-editor of this special issue. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Founding%20the%20Special%20Interest%20Group%20Audio-Visual%20in%20Digital%20Humanities%3A%20An%20Interview%20with%20Franciska%20de%20Jong,%20Martijn%20Kleppe,%20and%20Max%20Kemman&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Scagliola&rft.aufirst=Stefania&rft.au=Stefania%20Scagliola"> </span></div> </div> <div class="cluster"> <h3>Section 1: Annotation of AV Material as Method and Theory</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000515/000515.html">Exploring Film Language with a Digital Analysis Tool: the Case of Kinolab</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Allison Cooper, Bowdoin College; Fernando Nascimento, Bowdoin College; David Francis, Bowdoin College</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000515en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000515en')">[en]</a></span><span style="display:none" 
class="abstract" id="abstract000515en"> This article presents a case study of Kinolab, a digital platform for the analysis of narrative film language. It describes the need for a scholarly database of clips focusing on film language for cinema and media studies faculty and students, highlighting recent technological and legal advances that have created a favorable environment for this kind of digital humanities work. Discussion of the project is situated within the broader context of contemporary developments in moving image annotation and the unique challenges posed by computationally driven moving image analysis. The article also argues for a universally accepted data model for film language to facilitate the academic crowdsourcing of film clips and the sharing of research and resources across the Semantic Web. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Exploring%20Film%20Language%20with%20a%20Digital%20Analysis%20Tool%3A%20the%20Case%20of%20Kinolab&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Cooper&rft.aufirst=Allison&rft.au=Allison%20Cooper&rft.au=Fernando%20Nascimento&rft.au=David%20Francis"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000507/000507.html">Audiovisualities out of Annotation: Three Case Studies in Teaching Digital Annotation with Mediate</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Joel Burges, University of Rochester; Solvegia Armoskaite, University of Rochester; Tiamat Fox, University of Rochester; Darren Mueller, University of Rochester; Joshua Romphf, University of Rochester; Emily Sherwood, University of Rochester; Madeline Ullrich, University of Rochester</div><span
class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000507en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000507en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000507en"> This article describes Mediate: An Annotation Tool for Audiovisual Media, developed at the University of Rochester, and emphasizes the platform as a source for the understanding of film, television, poetry, pop songs, live performance, music, and advertising as shown in three case studies from film and media studies, music history, and linguistics. In each case, collaboration amongst students was not only key, but also enabled by Mediate, which allows students to work in groups to generate large amounts of data about audiovisual media. Further, the process of data generation produces quantitative and qualitative observation of the mediated interplay of sight and sound. A major outcome of these classes for the faculty teaching them has been the concept of audiovisualities: the physically and culturally interpenetrating modes of audiovisual experience and audiovisual inscription where hearing and seeing remediate one another for all of us as sensory and social subjects. Throughout the article, we chart how audiovisualities have emerged for students and ourselves out of digital annotation in Mediate. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Audiovisualities%20out%20of%20Annotation%3A%20Three%20Case%20Studies%20in%20Teaching%20Digital%20Annotation%20with%20Mediate&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Burges&rft.aufirst=Joel&rft.au=Joel%20Burges&rft.au=Solvegia%20Armoskaite&rft.au=Tiamat%20Fox&rft.au=Darren%20Mueller&rft.au=Joshua%20Romphf&rft.au=Emily%20Sherwood&rft.au=Madeline%20Ullrich"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000524/000524.html">The Media Ecology Project: Collaborative DH Synergies to Produce New Research in Visual Culture History</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Mark Williams, Dartmouth College; John Bell, Dartmouth College</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000524en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000524en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000524en"> This essay details the development and current NEH-funded research goals of The Media Ecology Project (MEP), directed by Prof. Mark Williams and designed by Dr. John Bell at Dartmouth. The virtuous cycle of access, research, and preservation that MEP realizes is built upon a foundation of technological advance (software development) plus large-scale partnership networks with scholars, students, and institutions of historical memory such as moving image archives. 
The development of our Onomy vocabulary tool and NEH-funded Semantic Annotation Tool (SAT) are detailed, including their application in two advancement grants from the NEH regarding 1) early cinema history, and 2) television newsfilm that covered the civil rights movement in the U.S. MEP is fundamentally 1) a sustainability project that 2) develops literacies of moving image and visual culture history, and 3) functions as a collaborative incubator that fosters new research questions and methods ranging from traditional Arts and Humanities close-textual analysis to computational distant reading. New research questions in relation to these workflows will literally transform the value of media archives and support the development of interdisciplinary research and pedagogy/curricular goals (e.g., media literacy) regarding the study of visual culture history and its legacies in the 21st century. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=The%20Media%20Ecology%20Project%3A%20Collaborative%20DH%20Synergies%20to%20Produce%20New%20Research%20in%20Visual%20Culture%20History&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Williams&rft.aufirst=Mark&rft.au=Mark%20Williams&rft.au=John%20Bell"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000512/000512.html">Audiated Annotation from the Middle Ages to the Open Web</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Tanya E. 
Clement, University of Texas at Austin; Liz Fischer, University of Texas at Austin</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000512en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000512en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000512en"> Current theories about the significance of annotations in literary studies are based primarily on assumptions developed in print culture about verbal texts. In these textual theories, the text is typically present, authorized, and centralized as the ideal text for an ideal reader, and to annotate is to add authorized comments in a sociotechnical system that includes publication, dissemination, and reception. To audiate is to imagine a song that's not playing. In music learning theory, audiation is based on the concept that the musician learns to play music by developing their own musical aptitude, their individual interpretation of a musical score based on their particular experience of the music. This short article introduces audiation as an alternate theoretical framing for articulating the significance of personal literary annotations. Comparing commentary on psalms in the Middle Ages to IIIF (International Image Interoperability Framework) web annotations, we use the concept of audiation to situate annotations within literary study in terms of a more capacious understanding of the individual's interpretation of text and of the reading experience as part of an unauthorized, distributed, and decentralized system. By bringing together theories and technologies of annotation with sound, we offer the concept of audiated annotations as a means to re-evaluate modes of access, discovery, and analysis of cultural objects in digital sound studies. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Audiated%20Annotation%20from%20the%20Middle%20Ages%20to%20the%20Open%20Web&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Clement&rft.aufirst=Tanya E.&rft.au=Tanya E.%20Clement&rft.au=Liz%20Fischer"> </span></div> </div> <div class="cluster"> <h3>Section 2: Analyzing (Meta)Data</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000509/000509.html">Healing the Gap: Digital Humanities Methods for the Virtual Reunification of Split Media and Paper Collections</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Stephanie Sapienza, Maryland Institute for Technology in the Humanities; Eric Hoyt, University of Wisconsin-Madison; Matt St. John, University of Wisconsin-Madison; Ed Summers, Maryland Institute for Technology in the Humanities; JJ Bersch, University of Wisconsin-Madison</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000509en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000509en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000509en"> This paper introduces and unpacks several challenges faced by stewards who work with audiovisual resources, departing from the premise that audiovisual resources are undervalued and underutilized as primary source materials for scholarship and therefore receive less attention in the sphere of digital humanities. 
It will then present original research from the Maryland Institute for Technology in the Humanities (MITH), in conjunction with the University of Wisconsin-Madison and the Wisconsin Historical Society, on a project entitled Unlocking the Airwaves: Revitalizing an Early Public Radio Collection. As a case study, Unlocking the Airwaves successfully meets these challenges by employing strategies such as virtual reunification, linked data, minimal computing, and synced transcripts, to provide integrated access to the collections of the National Association of Educational Broadcasters (NAEB), which are currently split between the University of Maryland (audio files) and the Wisconsin Historical Society (paper collections). The project demonstrates innovative approaches towards increasing the discoverability of audiovisual collections in ways that allow for better contextual description, and offers a flexible framework for connecting audiovisual collections to related archival collections. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Healing%20the%20Gap%3A%20Digital%20Humanities%20Methods%20for%20the%20Virtual%20Reunification%20of%20Split%20Media%20and%20Paper%20Collections&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Sapienza&rft.aufirst=Stephanie&rft.au=Stephanie%20Sapienza&rft.au=Eric%20Hoyt&rft.au=Matt%20St. John&rft.au=Ed%20Summers&rft.au=JJ%20Bersch"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000519/000519.html">PodcastRE Analytics: Using RSS to Study the Cultures and Norms of Podcasting </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Eric Hoyt, University of Wisconsin-Madison; J.J. 
Bersch, University of Wisconsin-Madison; Susan Noh, University of Wisconsin-Madison; Samuel Hansen, University of Michigan and University of Wisconsin-Madison; Jacob Mertens, University of Wisconsin-Madison; Jeremy Wade Morris, University of Wisconsin-Madison</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000519en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000519en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000519en"> Over the past decade, podcasting has grown into one of the most important media forms in the world. This article argues that podcasting’s unique technical affordances — particularly RSS feeds and user-entered metadata — open up productive methods for exploring the cultural practices and meanings of the medium. We share three different methods for studying RSS feeds and podcast metadata: 1) visualizing how topics and keywords trend over time; 2) visualizing networks of commonly associated keywords entered by podcasters; and 3) analyzing norms and common practices for the duration of podcasts (as a time-based media format, podcasting is unusual in that it is not bound by the programming schedules and technical limitations that provide strict parameters for most audiovisual forms). The methods and preliminary results reveal how metadata can function as a surrogate for studying large collections of time-based media objects. Yet our study also shows that, when it comes to born digital media, the metadata are never fully separate from the objects they describe. We argue that future work in AV in DH needs to delineate between methods best suited for digitized media collections compared to those most appropriate for born digital media collections. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=PodcastRE%20Analytics%3A%20Using%20RSS%20to%20Study%20the%20Cultures%20and%20Norms%20of%20Podcasting&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Hoyt&rft.aufirst=Eric&rft.au=Eric%20Hoyt&rft.au=J.J.%20Bersch&rft.au=Susan%20Noh&rft.au=Samuel%20Hansen&rft.au=Jacob%20Mertens&rft.au=Jeremy Wade%20Morris"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000523/000523.html">Transdisciplinary Analysis of a Corpus of French Newsreels: The ANTRACT Project</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Jean Carrive, Institut National de l'Audiovisuel; Abdelkrim Beloued, Institut National de l'Audiovisuel; Pascale Goetschel, Centre d'Histoire Sociale des Mondes Contemporains; Serge Heiden, ENS Lyon; Antoine Laurent, Laboratoire d'Informatique de l'Université du Mans; Pasquale Lisena, EURECOM; Franck Mazuet, Centre d'Histoire Sociale des Mondes Contemporains; Sylvain Meignier, Laboratoire d'Informatique de l'Université du Mans; Bénédicte Pincemin, ENS Lyon; Géraldine Poels, Institut National de l'Audiovisuel; Raphaël Troncy, EURECOM</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000523en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000523en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000523en"> The ANTRACT project is a cross-disciplinary apparatus dedicated to the analysis of the French newsreel company <cite class="italic">Les Actualités Françaises</cite> (1945-1969) and its film productions. 
Founded during the liberation of France, this state-owned company filmed more than 20,000 news reports shown in French cinemas and throughout the world over its 24 years of activity. The project brings together research organizations with a dual historical and technological perspective. ANTRACT's goal is to study the production process, the film content, the way historical events are represented, and the audience reception of <cite class="italic">Les Actualités Françaises</cite> newsreels using innovative AI-based data processing tools developed by partners specialized in image, audio, and text analysis. This article focuses on the data processing apparatus and tools of the project. Automatic content analysis is used to select data, to segment video units and typescript images, and to align them with their archival description. Automatic speech recognition provides a textual representation and natural language processing extracts named entities from the voice-over recording; automatic visual analysis is applied to detect and recognize faces of well-known figures in videos. These multifaceted data can then be queried and explored with the TXM text-mining platform. The results of these automatic analysis processes feed the Okapi platform, client-server software that integrates documentation, information retrieval, and hypermedia capabilities within a single environment based on Semantic Web standards. The complete corpus of <cite class="italic">Les Actualités Françaises</cite>, enriched with data and metadata, will be made available to the scientific community by the end of the project. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Transdisciplinary%20Analysis%20of%20a%20Corpus%20of%20French%20Newsreels%3A%20The%20ANTRACT%20Project&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Carrive&rft.aufirst=Jean&rft.au=Jean%20Carrive&rft.au=Abdelkrim%20Beloued&rft.au=Pascale%20Goetschel&rft.au=Serge%20Heiden&rft.au=Antoine%20Laurent&rft.au=Pasquale%20Lisena&rft.au=Franck%20Mazuet&rft.au=Sylvain%20Meignier&rft.au=Bénédicte%20Pincemin&rft.au=Géraldine%20Poels&rft.au=Raphaël%20Troncy"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000504/000504.html">Topological properties of music collaboration networks: The case of Jazz and Hip Hop</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Lukas Gienapp, Leipzig University; Clara Kruckenberg, Leipzig University; Manuel Burghardt, Leipzig University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000504en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000504en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000504en"> Studying collaboration in music is a prominent area of research in fields such as cultural studies, history, and musicology. For scholars interested in studying collaboration, network analysis has proven to be a viable methodological approach. Yet, a challenge is that heterogeneous data makes it difficult to study collaboration networks across music genres, which means that there are almost only studies on individual genres. 
To solve this problem, we propose a generalizable approach to studying the topological properties of music collaboration networks within and between genres that relies on data from the freely available Discogs database. To illustrate the approach, we provide a comparison of the genres Jazz and Hip Hop. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Topological%20properties%20of%20music%20collaboration%20networks%3A%20The%20case%20of%20Jazz%20and%20Hip%20Hop&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Gienapp&rft.aufirst=Lukas&rft.au=Lukas%20Gienapp&rft.au=Clara%20Kruckenberg&rft.au=Manuel%20Burghardt"> </span></div> </div> <div class="cluster"> <h3>Section 3: Creative and Liberatory Ways to Remix AV Data</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000516/000516.html">Afrofuturist Intellectual Mixtapes: A Classroom Case Study </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Tyechia L. Thompson, Virginia Tech; Dashiel Carrera, Virginia Tech</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000516en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000516en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000516en"> This article is a classroom case study of the Intellectual Mixtape Project, an AudioVisual digital humanities module. The intellectual mixtape uses jazz and hip hop as a framework to create an audio compilation and “conversation” that samples literary-audio texts (such as Sun Ra speeches, Octavia Butler interviews, Tracy K. Smith’s poetry readings, etc.). 
Each track of the intellectual mixtape has three audio components: 1) the literary-audio texts from the syllabus, 2) the students’ voice in their own words, and 3) an audio of the students’ choice. As a companion to each track, students write 500 words of liner notes that must include the title of their track and their curation and mixing decisions. Students then publish their entire intellectual mixtape (three or more tracks) with original or “remixed” cover art on an online platform. The first part of the study will discuss the structure of the intellectual mixtape assignments. In these assignments, students are provided with literary-audio texts, required to complete and submit audio homework assignments, and taught the basics of audio editing. This method of teaching and analyzing literature shifts the practice of literary analysis away from top-down approaches that privilege the authority of the text and instead encourages the student to “converse” with the text to create new knowledge. This method also reflects the artistic practice of Afrofuturist artists and theorists who improvise, remix, and sample to create their work. The second part of the study will discuss a performance and midterm adaptation of the Intellectual Mixtape Project entitled Sound of Space: An Interactive Afrofuturist Experience. “Sound of Space” was an immersive performance with four sensory stations that featured Afrofuturist themes. The midterm adaptation was showcased in the Cube, a four-story-high, state-of-the-art multimedia black box theater at Virginia Tech. In preparation for the performance, students merged sound engineering, 360-degree video projection, improvisational performance, and light design. “Sound of Space” introduced students and audiences to an immersive Afrofuturist-audio experience and pushed the boundaries of literary analysis. The third part of the study will address challenges with the Intellectual Mixtape Project. 
Challenges include finding relevant literary-audio texts and dealing with the many limitations imposed by U.S. copyright law. Some ways to address the challenges imposed by U.S. copyright law might be to 1) reclassify sampling audio as a form of quotation, 2) use databases of copyright-free music, 3) find culturally significant works from lesser-known artists who will license their tracks, and/or 4) pay royalties. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Afrofuturist%20Intellectual%20Mixtapes%3A%20A%20Classroom%20Case%20Study&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Thompson&rft.aufirst=Tyechia L.&rft.au=Tyechia L.%20Thompson&rft.au=Dashiel%20Carrera"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000505/000505.html">Annotating our Environs with the Sound and Sight of Numbers: The DataScapes Project</a><div style="padding-left:1em; margin:0;text-indent:-1em;">John Bonnett, Brock University; Joe Bolton, Business Insight 3; William Ralph, Brock University; Amy Legault, Billyard Insurance Group; Erin MacAfee, University of Ottawa; Michael Winter, Brock University; Chris Jaques, Badal.io; Mark Anderson, Independent Consultant</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000505en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000505en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000505en"> The DataScapes Project is an exploration of how Augmented Reality objects can be used as constituents for Landscape Architecture. 
Using Stephen Ramsay’s Screwmeneutics and Harold Innis' Oral Tradition as our theoretical points of departure, the project integrated the products of Data Art – the visualisation and sonification of data – as the constituents for our two works: The Five Senses and Emergence. The Five Senses was the product of protein data, while Emergence was generated using text from the King James version of the Holy Bible. In this exploratory treatment, we present the methods used to generate and display our two pieces. We further present anecdotal, qualitative evidence of viewer feedback, and use that as a basis to consider the ethics, challenges and opportunities that a future AR Landscape Architecture will present for scholars in the Digital Humanities. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Annotating%20our%20Environs%20with%20the%20Sound%20and%20Sight%20of%20Numbers%3A%20The%20DataScapes%20Project&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Bonnett&rft.aufirst=John&rft.au=John%20Bonnett&rft.au=Joe%20Bolton&rft.au=William%20Ralph&rft.au=Amy%20Legault&rft.au=Erin%20MacAfee&rft.au=Michael%20Winter&rft.au=Chris%20Jaques&rft.au=Mark%20Anderson"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000508/000508.html">What Does A Photograph Sound Like? Digital Image Sonification As Synesthetic AudioVisual Digital Humanities</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Michael J. 
Kramer, SUNY Brockport</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000508en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000508en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000508en"> Computers have the capacity to transpose the pixels, shapes, and other features of visual material into sound. This act of data correlation between the visual and the audial produces a new artifact, a sonic composition created from the visual source. The new artifact, however, correlates precisely to data in the original, thus allowing for fresh ways of perceiving its form, content, and context. Seeming to distort the visual object into an aural one paradoxically allows an observer to see the visual evidence anew, with more accuracy. A kind of generative, synesthetic criticism becomes possible by cutting across typical boundaries between the visual and the audio, the optic and the aural. Listening to as well as looking at visual artifacts by way of digital transpositions of data enables better close readings, more compelling interpretations, and deeper contextual understandings. Building on my earlier scholarship into image glitching, remixing, and sonification, this essay investigates a photograph of Joan Baez performing at the Greek Amphitheater in Berkeley, California, during the early 1960s. The image comes from my project on the Berkeley Folk Music Festival and the history of the folk music revival on the West Coast of the United States. Here, the use of digital image sonification becomes particularly intriguing. While we cannot magically recover the music being made in the photograph, we can more closely attend to the ghosts of sound within the silent snapshot. 
Digital image sonification does not recover the music itself, but it does help to amplify issues of gender, power, embodiment, spectacle, performance, and hierarchy in my perceptions of Baez making music in the photograph. Using the ear as well as the eye to scan the image for its multiple levels of meaning leads to unsuspected perceptions, which then support more revealing analysis. In digital image sonification, a cyborgian dance of data, signal, image, sound, history, and human perception emerges, activating visual materials for renewed scrutiny. In doing so, this mode of AudioVisual DH activates the scholarly imagination in promising new ways. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=What%20Does%20A%20Photograph%20Sound%20Like%3F%20Digital%20Image%20Sonification%20As%20Synesthetic%20AudioVisual%20Digital%20Humanities&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Kramer&rft.aufirst=Michael J.&rft.au=Michael J.%20Kramer"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000522/000522.html">From close listening to distant listening: Developing tools for Speech-Music discrimination of Danish music radio </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Iben Have, Aarhus University; Kenneth Enevoldsen, Aarhus University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000522en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000522en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000522en"> Digitization has changed flow music radio. 
Music streaming services like Spotify and iTunes have to a large extent outperformed traditional playlist radio, and the global dissemination of software-generated playlists in public service radio stations in the 1990s has superseded the passionate music radio host. But digitization has also changed the way we can do research in radio. In Denmark, the digitization of almost all radio programming back to 1989 has made it possible to actually listen to the archive to investigate how radio content has changed historically. This article investigates the research question of how the distribution of music and talk on the Danish Broadcasting Corporation’s radio channel P3 developed from 1989 to 2019 by comparing a qualitative case study with a new large-scale study. Methodologically, this shift from close listening to a few programs to large-scale distant listening to more than 65,000 hours of radio enables us to discuss and critically compare the methods, results, strengths, and shortcomings of the two analyses. Previous studies have demonstrated that Convolutional Neural Networks (CNNs) trained for image recognition on spectrograms of the audio outperform alternative approaches, such as Support Vector Machines (SVMs). The large-scale study presented shows that the CNN-based approach generalizes well, even without fine-tuning, to speech and music classification in Danish radio, with an overall accuracy of 98%. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=From%20close%20listening%20to%20distant%20listening%3A%20Developing%20tools%20for%20Speech-Music%20discrimination%20of%20Danish%20music%20radio&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Have&rft.aufirst=Iben&rft.au=Iben%20Have&rft.au=Kenneth%20Enevoldsen"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000513/000513.html">Hearing Change in the Chocolate City: Computational Methods for Listening to Gentrification</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Alison Martin, Dartmouth College</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000513en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000513en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000513en"> In this article, I outline a method of combining ethnography and computational soundscape analysis in order to listen to processes of gentrification in Washington, DC. I utilize Kaleidoscope Pro, a software suite built to visualize and cluster bioacoustic recordings (typically birds and bats) to cluster points of tension in the soundscape of Shaw, a rapidly gentrifying neighborhood in DC. Clustering and visualizing these sounds (which include car horns, sirens, public transportation, and music) makes audible the sonic markers of gentrification in Shaw. Furthermore, listening to gentrification is a call to engage with the sonic right to the city, histories of legislating sound, and sonorities of memory and nostalgia. 
This work contributes to the burgeoning black digital humanities canon by thinking through how computational methods can help us to hear black life. Although the digital humanities have turned to embrace the sonic in recent years, there is still much to be done in considering how to embrace the aural in DH work. This project invites us to listen closely to a changing neighborhood, and emphasizes sound as a valid mode of knowledge production, questioning how a sonic rendering of gentrifying space through the digital might move us toward more equitable soundscapes. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Hearing%20Change%20in%20the%20Chocolate%20City%3A%20Computational%20Methods%20for%20Listening%20to%20Gentrification&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Martin&rft.aufirst=Alison&rft.au=Alison%20Martin"> </span></div> </div> <div class="cluster"> <h3>Section 4: Reconfiguring Computational Methods for AV</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000517/000517.html">Advances in Digital Music Iconography: Benchmarking the detection of musical instruments in unrestricted, non-photorealistic images from the artistic domain</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Matthia Sabatelli, Montefiore Institute; Nikolay Banar, University of Antwerp; Marie Cocriamont, Royal Museums of Art and History, Brussels; Eva Coudyzer, Royal Institute for Cultural Heritage; Karine Lasaracina, Royal Museums of Fine Arts of Belgium, Brussels; Walter Daelemans, University of Antwerp; Pierre Geurts, University of Liège; Mike Kestemont, University of Antwerp</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" 
style="display:inline" id="abstractExpanderabstract000517en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000517en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000517en"> In this paper, we present MINERVA, the first benchmark dataset for the detection of musical instruments in non-photorealistic, unrestricted image collections from the realm of the visual arts. This effort is situated against the scholarly background of music iconography, an interdisciplinary field at the intersection of musicology and art history. We benchmark a number of state-of-the-art systems for image classification and object detection. Our results demonstrate the feasibility of the task but also highlight the significant challenges which this artistic material poses to computer vision. We evaluate the system on an out-of-sample collection and offer an interpretive discussion of the false positives detected. The error analysis yields a number of unexpected insights into the contextual cues that trigger the detector. The iconography surrounding children and musical instruments, for instance, shares some core properties, such as an intimacy in body language. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Advances%20in%20Digital%20Music%20Iconography%3A%20Benchmarking%20the%20detection%20of%20musical%20instruments%20in%20unrestricted,%20non-photorealistic%20images%20from%20the%20artistic%20domain&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Sabatelli&rft.aufirst=Matthia&rft.au=Matthia%20Sabatelli&rft.au=Nikolay%20Banar&rft.au=Marie%20Cocriamont&rft.au=Eva%20Coudyzer&rft.au=Karine%20Lasaracina&rft.au=Walter%20Daelemans&rft.au=Pierre%20Geurts&rft.au=Mike%20Kestemont"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000520/000520.html">Music Theory, the Missing Link Between Music-Related Big Data and Artificial Intelligence</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Jeffrey A. T. Lupker, The University of Western Ontario; William J. Turkel, The University of Western Ontario</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000520en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000520en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000520en"> This paper examines musical artificial intelligence (AI) algorithms that can not only learn from big data, but learn in ways that would be familiar to a musician or music theorist. This paper aims to find more effective links between music-related big data and artificial intelligence algorithms by incorporating principles with a strong grounding in music theory. 
We show that it is possible to increase the accuracy of two common algorithms (mode prediction and key prediction) by using music-theory-based techniques during the data preparation process. We offer methods to alter often-used Krumhansl-Kessler profiles, and the manner in which they are employed during preprocessing, to aid the connection of musical big data and mode- or key-predicting algorithms. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Music%20Theory,%20the%20Missing%20Link%20Between%20Music-Related%20Big%20Data%20and%20Artificial%20Intelligence&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Lupker&rft.aufirst=Jeffrey A. T.&rft.au=Jeffrey A. T.%20Lupker&rft.au=William J.%20Turkel"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000506/000506.html">Comparative K-Pop Choreography Analysis through Deep-Learning Pose Estimation across a Large Video Corpus</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Peter Broadwell, Stanford University; Timothy R. 
Tangherlini, University of California, Berkeley</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000506en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000506en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000506en"> The recent advent of deep learning-based pose detection methods that can reliably detect human body/limb positions from video frames, together with the online availability of massive digital video corpora, gives digital humanities researchers the ability to conduct "distant viewing" analyses of movement and particularly full-body choreography at much larger scales than previously feasible. These developments make possible innovative, revelatory digital cultural analytics work across many sources, from historical footage to contemporary images. They are also ideally suited to provide novel insight into the study of K-pop choreography. As a specifically non-textual modality, K-pop dance performances, particularly those of corporate and government-sponsored "idol" groups, are a key component of K-pop’s core mission of projecting "soft power" into the international sphere. A related consequence of this strategy is the ready availability in online video repositories of many K-pop music videos, starting from the milieu's origins in the 1990s, including an ever-growing collection of official "dance practice" videos and fan-contributed dance cover videos and supercuts from live performances. These latter videos are a direct consequence of the online propagation of the "Korean wave" by generations of tech-savvy fans on social media platforms. 
In this paper, we describe the considerations and choices made in the process of applying deep learning-based pose detection to a large corpus of K-pop music videos, and present the analytical methods we developed while focusing on a smaller subset of dance practice videos. A guiding principle for these efforts was to adopt techniques for characterizing, categorizing and comparing poses within and between videos, and for analyzing various qualities of motion as time-series data, that would be applicable to many kinds of movement choreography, rather than specific to K-pop dance. We conclude with case studies demonstrating how our methods contribute to the development of a typology of K-pop poses and sequences of poses ("moves") that can facilitate a data-driven study of the constitutive interdependence of K-pop and other cultural genres. We also show how this work advances methods for "distant" analyses of dance performances and larger corpora, considering such criteria as repetitiveness and degree of synchronization, as well as more idiosyncratic measures such as the "tightness" of a group performance. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Comparative%20K-Pop%20Choreography%20Analysis%20through%20Deep-Learning%20Pose%20Estimation%20across%20a%20Large%20Video%20Corpus&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Broadwell&rft.aufirst=Peter&rft.au=Peter%20Broadwell&rft.au=Timothy R.%20Tangherlini"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000511/000511.html">Moving Cinematic History: Filmic Analysis through Performative Research </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Jenny Oyallon-Koloski, University of Illinois at Urbana–Champaign; Dora Valkanova, University of Illinois at Urbana–Champaign; Michael J. Junokas, University of Illinois at Urbana–Champaign; Kayt MacMaster, University of Illinois at Urbana–Champaign; Sarah Marks Mininsohn, University of Illinois at Urbana–Champaign</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000511en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000511en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000511en"> In this paper, we argue for the value of motion capture-driven research that moves audiovisual analysis in a performative direction to integrate the dancer/researcher into the cinematic space. Like the work of videographic practitioners who communicate their findings through the audiovisual medium, rather than the written medium, this work seeks to engage with what Catherine Grant and Brad Haseman have called performative research by applying a practice-led approach to moving image analysis. 
Through a physical and virtual embodiment of film and dance form, we seek to better understand the formal implications of dance’s integration into cinematic space and the material conditions that affected filmmakers’ narrative and stylistic choices. The Movement Visualization Tool (mv tool) is a virtual research environment that generates live feedback of multiple agents’ movement. Accessible motion capture technology renders an abstracted skeleton of the moving agents, providing information about movement pathways through space using color-based and historical traceform filters. The tool can also replicate a mobile frame aesthetic, allowing for a constructed mover and a virtually constructed camera to engage in performative dialogue. We use the mv tool and videographic methods to recreate and disseminate two cases: movement scales from Laban/Bartenieff Movement Studies and dance sequences from narrative cinema. Rather than working from existing audiovisual content, we posit that the act of recreating the movement phrases leads to a deeper understanding of the choreography and, in the case of the filmic examples, of the formal practices that led to their creation. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Moving%20Cinematic%20History%3A%20Filmic%20Analysis%20through%20Performative%20Research&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Oyallon-Koloski&rft.aufirst=Jenny&rft.au=Jenny%20Oyallon-Koloski&rft.au=Dora%20Valkanova&rft.au=Michael J.%20Junokas&rft.au=Kayt%20MacMaster&rft.au=Sarah Marks%20Mininsohn"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000510/000510.html">Towards a User-Friendly Tool for Automated Sign Annotation: Identification and Annotation of Time Slots, Number of Hands, and Handshape </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Manolis Fragkiadakis, Leiden University; Victoria Nyst, Leiden University; Peter van der Putten, Leiden University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000510en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000510en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000510en"> The annotation process of sign language corpora in terms of glosses is a highly labor-intensive task, but a precondition for reliable quantitative analysis. During the annotation process the researcher typically defines the precise time slot in which a sign occurs and then enters the appropriate gloss for the sign. The aim of this project is to develop a set of tools to assist the annotation of the signs and their formal features in a video irrespective of its content and quality. Recent advances in the field of deep learning have led to the development of accurate and fast pose estimation frameworks. 
In this study, such a framework (namely OpenPose) has been used to develop three different methods and tools to facilitate the annotation process. The first tool estimates the span of a sign sequence and creates empty slots in an annotation file. The second tool detects whether a sign is one- or two-handed. The last tool recognizes the different handshapes presented in a video sample. All tools can be easily re-trained to fit the needs of the researcher. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Towards%20a%20User-Friendly%20Tool%20for%20Automated%20Sign%20Annotation%3A%20Identification%20and%20Annotation%20of%20Time%20Slots,%20Number%20of%20Hands,%20and%20Handshape&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Fragkiadakis&rft.aufirst=Manolis&rft.au=Manolis%20Fragkiadakis&rft.au=Victoria%20Nyst&rft.au=Peter%20van der Putten"> </span></div> </div> <div class="cluster"> <h3>Section 5: Forms of AV Scholarship</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000527/000527.html">Books Aren't Dead: Resurrecting Audio Technology and Feminist Digital Humanities Approaches to Publication and Authorship</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Emily Edwards, Bowling Green State University; Robin Hershkowitz, Bowling Green State University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000527en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000527en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000527en"> This article explores how the podcast medium as a form of audio technology has 
facilitated the reimagining of academic publication and feminist praxis. In this case study, we situate the podcast <cite class="italic">Books Aren’t Dead (BAD)</cite>, an affiliate of the Fembot Collective, within a broader context of digital humanities scholarship and the field's potential to utilize audio technology to realize feminist approaches. <cite class="italic">BAD</cite>, as a podcast, serves as an open-access medium that brings authors and reviewers together in a collaborative context. Audiobook reviews allow for a conversation between author and interviewer, whereby the author can place the work in a broader scholastic and contemporary context for listeners as well as actively engage with constructive critique and questions. The result is a dynamic scholarly communication rather than the static textual product of a book review. We discuss the unique role of audio technology within the knowledge production process from a performance studies and archival point of view. Additionally, in the spirit of the project <cite class="italic">BAD</cite>, we also provide an addendum to our textual discussion by including a podcast where we discuss these themes as co-producers, graduate students, and young academics, exploring how audio technology can break down barriers to publication and authorship. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Books%20Aren't%20Dead%3A%20Resurrecting%20Audio%20Technology%20and%20Feminist%20Digital%20Humanities%20Approaches%20to%20Publication%20and%20Authorship&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Edwards&rft.aufirst=Emily&rft.au=Emily%20Edwards&rft.au=Robin%20Hershkowitz"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000514/000514.html">Another Type of Human Narrative: Visualizing Movement Histories Through Motion Capture Data and Virtual Reality</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Eugenia S. Kim, The Hong Kong Academy for Performing Arts</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000514en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000514en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000514en"> In this article I propose that motion capture (mocap) and virtual reality (VR) technology can be used to record and visualize movement histories as a supplement to oral histories, or when a memory is based in an embodied experience. One specific example would be the presentation of illness narratives. To illustrate this situation, I examine the concept of illness narratives, particularly those created by dance artists, and use my movement history, Lithium Hindsight 360, as a case study. This analysis comes from the perspective of a hybrid movement artist, VR creator, archivist and digital humanist, with first-hand experience of the challenges encountered when creating a movement history. 
The challenges are presented within the context of mocap recording, data curation, digital preservation and sustainability issues. I end this article by providing some basic practical strategies and recommendations for researchers who are new to documenting movement histories. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Another%20Type%20of%20Human%20Narrative%3A%20Visualizing%20Movement%20Histories%20Through%20Motion%20Capture%20Data%20and%20Virtual%20Reality&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Kim&rft.aufirst=Eugenia S.&rft.au=Eugenia S.%20Kim"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000521/000521.html">Deformin' in the Rain: How (and Why) to Break a Classic Film </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Jason Mittell, Middlebury College</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000521en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000521en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000521en"> Digital source materials such as films can be transformed in ways that suggest an innovative path for digital humanities research: computationally manipulating sounds and images to create new audiovisual artifacts whose insights might be revealed through their aesthetic power and transformative strangeness. Following upon the strain of digital humanities practice that Mark Sample terms the “deformed humanities,” this essay subjects a single film to a series of deformations: the classic musical <cite class="italic">Singin' in the Rain</cite>. 
Accompanying more than twenty original audiovisual deformations in still image, GIF, and video formats, the essay considers both what each new version reveals about the film (and cinema more broadly) and how we might engage with the emergent derivative aesthetic object created by algorithmic practice as a product of the deformed humanities. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Deformin'%20in%20the%20Rain%3A%20How%20(and%20Why)%20to%20Break%20a%20Classic%20Film&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Mittell&rft.aufirst=Jason&rft.au=Jason%20Mittell"> </span></div> </div> <div class="cluster"><h3>Reviews</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000543/000543.html">Book Review: Digital Sound Studies (2018) </a><div style="padding-left:1em; margin:0;text-indent:-1em;">Tracey El Hajj, University of Victoria</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000543en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000543en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000543en"> The edited volume <cite class="italic">Digital Sound Studies</cite> brings together various voices addressing the potential of digital approaches to sound, practically and theoretically. Contributors explore methodologies, platforms, and initiatives that exemplify interdisciplinary and inclusive work centering sound and listening, while demonstrating how such work can advance humanities scholarship. 
The contributions provide a balanced critique of DH as a norm and culture alongside detailing digital sound studies' contributions to DH, the humanities, and the public. The volume is an excellent resource for those interested in digital sound studies. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Book%20Review%3A%20Digital%20Sound%20Studies%20(2018)&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=El Hajj&rft.aufirst=Tracey&rft.au=Tracey%20El Hajj"> </span></div> </div> <h2>Göttingen Dialogues in Digital Humanities</h2> <div class="cluster"><h3>Editor: Marco Büchler</h3></div> <div class="cluster"><h3>Articles</h3> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000551/000551.html">Introduction to Göttingen Dialogues 2016</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Marco Büchler, Institut für Angewandte Informatik (InfAI)</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000551en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000551en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000551en"> An introduction to the special issue on the 2016 Göttingen Dialogues </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Introduction%20to%20Göttingen%20Dialogues%202016&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Büchler&rft.aufirst=Marco&rft.au=Marco%20Büchler"> </span></div> 
<div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000525/000525.html">Hierarchical or Non-hierarchical? A Philosophical Approach to a Debate in Text Encoding</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Alois Pichler, University of Bergen</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000525en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000525en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000525en"> Is hierarchical XML apt for the encoding of complex manuscript materials? Some scholars have argued that texts are non-hierarchical entities and that XML therefore is inadequate. This paper argues that the nature of text is such that it supports both hierarchical and non-hierarchical representations. The paper distinguishes (1) texts from documents and document carriers, (2) writing from "texting", (3) actions that can be performed by one agent only from actions that require at least two agents to come about (“shared actions”), (4) finite actions from potentially infinitely ongoing actions. Texts are described as potentially infinitely ongoing shared actions which are co-produced by author and reader agents. This makes texts into entities that are more akin to events than to objects or properties, and shows, moreover, that texts are dependent on human understanding and thus mind-dependent entities. One consequence of this is that text encoding needs to be recognized as an act participating in texting, which in turn makes hierarchical XML as apt a markup for “text representation”, or rather: for texting, as non-hierarchical markup. The encoding practices of the Bergen Wittgenstein Archives (WAB) serve as the main touchstone for my discussion. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Hierarchical%20or%20Non-hierarchical%3F%20A%20Philosophical%20Approach%20to%20a%20Debate%20in%20Text%20Encoding&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Pichler&rft.aufirst=Alois&rft.au=Alois%20Pichler"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000538/000538.html">Annotating ritual in ancient Greek tragedy: a bottom-up approach in action</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Gloria Mugelli, Laboratorio di Antropologia del Mondo Antico, Università di Pisa; Federico Boschetti, CoPhiLab, ILC, CNR Pisa; Andrea Bellandi, CoPhiLab, ILC, CNR Pisa; Riccardo Del Gratta, CoPhiLab, ILC, CNR Pisa; Anas Fahad Khan, CoPhiLab, ILC, CNR Pisa; Andrea Taddei, Laboratorio di Antropologia del Mondo Antico, Università di Pisa</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000538en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000538en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000538en"> EuporiaRAGT is one of the pilot projects that adopt the Euporia system as digital support for historico-anthropological research on the form and function of rituals in the texts of ancient Greek tragedy. 
This paper describes the bottom-up approach adopted in the project: during the annotation stage, performed with a Domain Specific Language designed with a user-centred approach, the domain expert can annotate ritual and religious phenomena, with the possibility of registering different textual and interpretive variants; the design of a search engine, in a second phase of the work, allows the database to be tested and reorganized. Finally, the construction of an ontology makes it possible to structure the tags so that complex queries can be performed on the database. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Annotating%20ritual%20in%20ancient%20Greek%20tragedy%3A%20a%20bottom-up%20approach%20in%20action&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Mugelli&rft.aufirst=Gloria&rft.au=Gloria%20Mugelli&rft.au=Federico%20Boschetti&rft.au=Andrea%20Bellandi&rft.au=Riccardo%20Del Gratta&rft.au=Anas Fahad%20Khan&rft.au=Andrea%20Taddei"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000539/000539.html">Can an author style be unveiled through word distribution?</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Giulia Benotto, Extra Group</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000539en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000539en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000539en"> The inclusion of semantic features in the stylometric analysis of literary texts appears to be poorly investigated. 
In this work, we experiment with the application of Distributional Semantics to a corpus of Italian literature to test whether word distribution can convey stylistic cues. To verify our hypothesis, we have set up an Authorship Attribution experiment. The results we obtained suggest that the style of an author can indeed reveal itself through word distribution as well. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Can%20an%20author%20style%20be%20unveiled%20through%20word%20distribution%3F&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Benotto&rft.aufirst=Giulia&rft.au=Giulia%20Benotto"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000526/000526.html">Using an Advanced Text Index Structure for Corpus Exploration in Digital Humanities</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Tobias Englmeier, CIS, Ludwig-Maximilians University, Munich, Germany; Marco Büchler, Institute of Computer Science, University of Göttingen, Göttingen, Germany; Stefan Gerdjikov, FMI, University of Sofia "St. Kliment Ohridski", Sofia, Bulgaria; Klaus U. Schulz, CIS, Ludwig-Maximilians University, Munich, Germany</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000526"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000526')">[en]</a></span><span style="display:none" class="abstract" id="abstract000526"> With suitable index structures, many corpus exploration tasks can be solved efficiently without rescanning the text repository in an online manner. 
In this paper we show that symmetric compacted directed acyclic word graphs (SCDAWGs) - a refinement of suffix trees - offer an ideal basis for corpus exploration, helping to answer many of the questions raised in DH research in an elegant way. From a simplified point of view, the advantages of SCDAWGs rely on two properties. First, the index can be computed in linear time and offers a joint view of the similarities (in terms of common substrings) and differences between all texts. Second, structural regularities of the index help to mine interesting portions of texts (such as phrases and concept names) and their relationships in a language-independent way without using prior linguistic knowledge. As a demonstration of the power of these principles, we look at text alignment, text reuse in distinct texts or between distinct authors, automated detection of concepts, temporal distribution of phrases in diachronic corpora, and related problems. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Using%20an%20Advanced%20Text%20Index%20Structure%20for%20Corpus%20Exploration%20in%20Digital%20Humanities&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Englmeier&rft.aufirst=Tobias&rft.au=Tobias%20Englmeier&rft.au=Marco%20Büchler&rft.au=Stefan%20Gerdjikov&rft.au=Klaus U.%20Schulz"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000537/000537.html">Computer Vision and the Creation of a Database of Printers’ Ornaments</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Hazel Wilkinson, University of Birmingham, UK; James Briggs, Graphcore; Dirk Gorissen, Machine Learning Ltd.</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" 
style="display:inline" id="abstractExpanderabstract000537en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000537en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000537en"> This article describes the creation of a database of over 1 million images of eighteenth-century printers’ ornaments (or fleurons) using computer vision, and how the database was refined using machine learning. The successes and limitations of the method used are discussed, and the purpose of the database for research in the humanities is outlined. The article concludes with a summary of recent developments in the project, which include the addition of a visual search function provided by the Seebibyte Project. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Computer%20Vision%20and%20the%20Creation%20of%20a%20Database%20of%20Printers%E2%80%99%20Ornaments&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Wilkinson&rft.aufirst=Hazel&rft.au=Hazel%20Wilkinson&rft.au=James%20Briggs&rft.au=Dirk%20Gorissen"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000544/000544.html">Inferring standard name form, gender and nobility from historical texts using stable model semantics</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Davor Lauc, Chair of Logic, Department of Philosophy, Faculty of Humanities and Social Sciences, University of Zagreb; Darko Vitek, Department of History, Centre of Croatian Studies, University of Zagreb</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000544en"><a title="View Abstract" class="expandCollapse 
monospace" href="javascript:expandAbstract('abstract000544en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000544en"> In this paper, we attack the problem of parsing name expressions and inferring standard name form, gender and nobility status from serial historical sources. This is a small but important part of modelling historians’ analysis of such sources, as they extract a lot of information from the names in text, and this information constrains their search. The task of parsing proper names may seem easy, but it is hard even for modern languages, and even more challenging for the languages of historical sources. The test case used for the research was the mid-19th-century census for the old town centre of Zagreb. In order to evaluate and compare the fitness of the probabilistic and rule-based models for the task of inferring standard name form, both conditional random field (CRF) and rule-based models based on stable model semantics (Answer Set Programming Rules) were developed. Our results indicated that the rule-based approach is more suitable for inferring standard name forms from historical texts than the more widespread statistical approach. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Inferring%20standard%20name%20form,%20gender%20and%20nobility%20from%20historical%20texts%20using%20stable%20model%20semantics&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Lauc&rft.aufirst=Davor&rft.au=Davor%20Lauc&rft.au=Darko%20Vitek"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000545/000545.html">German Narratives in International Television Format Adaptations: Comparing <cite class="italic">Du und Ich</cite> (ZDF 2002) with <cite class="italic">Un Gars</cite>, <cite class="italic">Une Fille</cite> (Quebec 1997-2002)</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Edward Larkey, University of Maryland, Baltimore County</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000545en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000545en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000545en"> This article cross-culturally compares the German remake of the Quebec sketch comedy/sitcom series <cite class="italic">Un Gars, Une Fille</cite> ("A Guy and a Girl") with the original version by correlating quantitative data derived from the Adobe Premiere Pro annotation function on the duration of narrative segments and incorporating these data into an interpretation of family conflict management strategies, gender roles, and conflicts between the mother-in-law and the young 30-something couple as protagonists. 
The article examines a scene in which the daughter confronts her mother about the trauma-inducing behavior she was subjected to as a young girl, and in which the boyfriend confronts the mother-in-law's animosity toward him. The article delves into the background of the transactional, belligerent, and obligational thinking behind the family relationship of the German couple, compared to the more affectionate and conciliatory relationship in all other versions. The investigation postulates that the family relationships in the German version must be seen in the context of the <cite class="italic">Inability to Mourn</cite>, a major psychological study by Margarete and Alexander Mitscherlich in the 1960s of politically salient post-World War II trauma among a large variety of social groups in West (and East) Germany. The emotional repression resulting from various forms of guilt, which never explicitly surfaced in the confrontations, was passed down from generation to generation; the same or a similar social-psychological context was seemingly not a factor in the other countries in which adaptations of the series were produced, many of which had also experienced repressive dictatorships during and after World War II. Further collaborative investigations would be required to uncover the reasons for this discrepancy. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=German%20Narratives%20in%20International%20Television%20Format%20Adaptations%3A%20Comparing%20Du%20und%20Ich%20(ZDF%202002)%20with%20Un%20Gars,%20Une%20Fille%20(Quebec%201997-2002)&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-05-21&rft.volume=015&rft.issue=1&rft.aulast=Larkey&rft.aufirst=Edward&rft.au=Edward%20Larkey"> </span></div> </div> <h2>Articles</h2> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000501/000501.html">From the Presupposition of Doom to the Manifestation of Code: Using Emulated Citation in the Study of Games and Cultural Software</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Eric Kaltman, Department of Computer Science, California State University Channel Islands; Joseph Osborn, Department of Computer Science, Pomona College; Noah Wardrip-Fruin, Department of Computational Media, University of California, Santa Cruz</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000501en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000501en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000501en"> For the field of game history to mature, and for game studies more broadly to function in a scholarly manner in the coming decades, one necessity will be improvement of game citation practices. Current practices have some obvious problems, such as a lack of standardization even within the same journal or book series. 
But a more pressing problem is disguised by the field’s youth: Common citation practices depend on the play experiences and cultural knowledge of a generation of game studies scholars and readers who are largely old enough to have lived through the eras they are discussing. More sustainable and precise alternatives cannot fall back on the tools available for fixed media — such as the direct quotations and page numbers used for books or the screenshots (of images that appear to all viewers) and timecode used for video. Instead, this essay imagines an alternative approach, working in the digital humanities traditions of speculative collections and tool-based argumentation. In the speculative future we present, there are scholarly collections of software, as well as tools available for citing software states and integrating these citations into scholarly arguments. A working prototype of such a tool is presented, together with examples of scholarly use and the results of an evaluation of the concept with game scholars. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=From%20the%20Presupposition%20of%20Doom%20to%20the%20Manifestation%20of%20Code%3A%20Using%20Emulated%20Citation%20in%20the%20Study%20of%20Games%20and%20Cultural%20Software&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Kaltman&rft.aufirst=Eric&rft.au=Eric%20Kaltman&rft.au=Joseph%20Osborn&rft.au=Noah%20Wardrip-Fruin"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000536/000536.html">Fostering Community Engagement through Datathon Events: The Archives Unleashed Experience</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Samantha Fritz, Department of History, University of Waterloo; Ian Milligan, Department of History, University of Waterloo; Nick Ruest, Digital Scholarship Infrastructure Department, York Univeristy; Jimmy Lin, David R. Cheriton School of Computer Science, University of Waterloo</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000536en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000536en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000536en"> This article explores the impact that a series of Archives Unleashed datathon events have had on community engagement both within the web archiving field, and more specifically, on the professional practices of attendees. We present results from surveyed datathon participants, in addition to related evidence from our events, to discuss how our participants saw the datathons as dramatically impacting both their professional practices as well as the broader web archiving community. 
Drawing on and adapting two leading community engagement models, we combine them to introduce a new understanding of how to build and engage users in an open-source digital humanities project. Our model illustrates both the activities undertaken by our project as well as the related impact they have on the field. The model can be broadly applied to other digital humanities projects seeking to engage their communities. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Fostering%20Community%20Engagement%20through%20Datathon%20Events%3A%20The%20Archives%20Unleashed%20Experience&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Fritz&rft.aufirst=Samantha&rft.au=Samantha%20Fritz&rft.au=Ian%20Milligan&rft.au=Nick%20Ruest&rft.au=Jimmy%20Lin"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000540/000540.html">Leonardo, Morelli, and the Computational Mirror</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Alison Langmead, University of Pittsburgh; Christopher J. 
Nygren, University of Pittsburgh; Paul Rodriguez, San Diego Supercomputer Center; Alan Craig, Independent Consultant and Researcher</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000540en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000540en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000540en"> By bringing forward and interpreting the results from a collaborative research project that used contemporary computing techniques to investigate Giovanni Morelli’s nineteenth-century method for making stylistic attributions of old master paintings, this article examines how humanists make claims to knowledge and how this process may or may not be modellable or mechanizable within the context of classical, deterministic, digital computation. It begins with an explanation of the rationale behind choosing the Morellian practice of attribution, continues with a survey of another effort at computationally implementing Morelli’s method, and then presents our own computational techniques and results. The article concludes with what we have come to understand about the roles of responsibility, trust, and expertise in the social practice of art attribution, and the dangers in assuming that such human entailments are native to digital computers. 
</span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Leonardo,%20Morelli,%20and%20the%20Computational%20Mirror&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Langmead&rft.aufirst=Alison&rft.au=Alison%20Langmead&rft.au=Christopher J.%20Nygren&rft.au=Paul%20Rodriguez&rft.au=Alan%20Craig"> </span></div> <h2>Reviews</h2> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000534/000534.html">Networks, Maps, and Time: Visualizing Historical Networks Using Palladio</a><div style="padding-left:1em; margin:0;text-indent:-1em;">Melanie Conroy, The University of Memphis</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000534en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000534en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000534en"> Many tools can produce maps, graphs, and charts that may differ in seemingly minor ways. Data visualization tools are one type of “middleware” that can be all but forgotten when one is presented with final products such as papers and presentations containing visualizations. Since the output of various software packages is sometimes similar, it is easy to forget the assumptions that went into the diagram, the dataset, and the software when looking at the final product — or even while using the tool if one becomes sufficiently accustomed to the interface. In this review, I revisit the visualization suite Palladio – which Miriam Posner has called a “Swiss Army knife for humanities data” – and the many projects that have made use of Palladio’s core features in the years since its launch. 
I examine the strengths and limitations of Palladio as a network- and map-making tool for exploring data and for rapidly prototyping diagrams, designed with an iterative process in mind. I contrast this iterative mentality with the analytic sensibility of tools like Gephi and Cytoscape, and review the primary features of Palladio through one central case study (my own visualizations of the French Enlightenment network) and examples of how the features have been used in other digital humanities projects. Palladio is very useful for qualitative studies of data that include geospatial and chronological dimensions, especially when the data are tagged with different types of qualitative metadata, but it also tends to impose a historical geographical view on the data by foregrounding geospatial relationships, time, and other historical considerations. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=Networks,%20Maps,%20and%20Time%3A%20Visualizing%20Historical%20Networks%20Using%20Palladio&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Conroy&rft.aufirst=Melanie&rft.au=Melanie%20Conroy"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000535/000535.html">A Review of <cite class="italic">Twitter and Tear Gas</cite></a><div style="padding-left:1em; margin:0;text-indent:-1em;">Nanditha Narayanamoorthy, York University</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000535en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000535en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000535en"> Zeynep Tufekci’s book <cite
class="italic">Twitter and Tear Gas</cite> (Yale University Press, 2017) speaks to high-profile, anti-authoritarian networked protests. She engages with street protests and online movements to offer new perspectives on the need to reconfigure digitally networked online spaces and on the trajectories of these social movements. Her work contributes to scholarship in digital activism and digital humanities in the context of networked movements. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=A%20Review%20of%20Twitter%20and%20Tear%20Gas&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Narayanamoorthy&rft.aufirst=Nanditha&rft.au=Nanditha%20Narayanamoorthy"> </span></div> <div class="articleInfo" style="margin:0 0 1em 0;"><span class="monospace">[en] </span><a href="/dhq/vol/15/1/000530/000530.html">A Review of <cite class="italic">Intergenerational Connections in Digital Families</cite></a><div style="padding-left:1em; margin:0;text-indent:-1em;">Sucharita Sarkar, D.T.S.S. College of Commerce, Mumbai, India</div><span class="viewAbstract">Abstract <span class="viewAbstract monospace" style="display:inline" id="abstractExpanderabstract000530en"><a title="View Abstract" class="expandCollapse monospace" href="javascript:expandAbstract('abstract000530en')">[en]</a></span><span style="display:none" class="abstract" id="abstract000530en"> This review synthesizes Sakari Taipale’s book <cite class="italic">Intergenerational Connections in Digital Families</cite> (Springer International Publishing, 2019), partially from an auto-ethnographic perspective. Borrowing from the book’s structure, the review is divided into three parts. 
The first section examines the definitions of digital families and the role of everyday communication technologies in connecting such families. The second section critiques the interconnected roles of family members and generations in maintaining digital connections, especially through Taipale’s revival of the notion of the “warm expert.” The final section assesses the book’s conclusions in the context of changing social policies. It also looks at the possibilities for future research in the domain of digital family studies (and, by extension, in digital humanities) that can germinate from Taipale’s concise study. </span></span><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rfr_id=info%3Asid%2Fzotero.org%3A2&rft.genre=article&rft.atitle=A%20Review%20of%20Intergenerational%20Connections%20in%20Digital%20Families&rft.jtitle=Digital%20Humanities%20Quarterly&rft.stitle=DHQ&rft.issn=1938-4122&rft.date=2021-03-05&rft.volume=015&rft.issue=1&rft.aulast=Sarkar&rft.aufirst=Sucharita&rft.au=Sucharita%20Sarkar"> </span></div> <h2><a href="/dhq/vol/15/1/bios.html">Author Biographies</a></h2></div><div id="footer"><div style="float:left; max-width:70%;"> URL: http://www.digitalhumanities.org/dhq/vol/15/1/index.html<br/> Comments: <a href="mailto:dhqinfo@digitalhumanities.org" class="footer">dhqinfo@digitalhumanities.org</a><br/> Published by: <a href="http://www.digitalhumanities.org" class="footer">The Alliance of Digital Humanities Organizations</a> and <a href="http://www.ach.org" class="footer">The Association for Computers and the Humanities</a><br/>Affiliated with: <a href="https://academic.oup.com/dsh">Digital Scholarship in the Humanities</a><br/> DHQ has been made possible in part by the <a href="https://www.neh.gov/">National Endowment for the Humanities</a>.<br/>Copyright © 2005 - <script type="text/javascript"> var currentDate = new Date(); 
document.write(currentDate.getFullYear());</script><br/><a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nd/4.0/80x15.png"/></a><br/>Unless otherwise noted, the DHQ web site and all DHQ published content are published under a <a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/">Creative Commons Attribution-NoDerivatives 4.0 International License</a>. Individual articles may carry a more permissive license, as described in the footer for the individual article, and in the article’s metadata. </div><img style="max-width:200px;float:right;" src="https://www.neh.gov/sites/default/files/styles/medium/public/2019-08/NEH-Preferred-Seal820.jpg?itok=VyHHX8pd"/></div></div></div></body></html>