<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:m="http://www.w3.org/1998/Math/MathML"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"/><title>DHQ: Digital Humanities Quarterly: Introducing DREaM (Distant Reading Early Modernity)</title><link rel="stylesheet" type="text/css" href="/dhq/common/css/dhq.css"/><link rel="stylesheet" type="text/css" media="screen" href="/dhq/common/css/dhq_screen.css"/><link rel="stylesheet" type="text/css" media="print" href="/dhq/common/css/dhq_print.css"/><link rel="alternate" type="application/atom+xml" href="/dhq/feed/news.xml"/><link rel="shortcut icon" href="/dhq/common/images/favicon.ico"/><script defer="defer" type="text/javascript" src="/dhq/common/js/javascriptLibrary.js"><!-- serialize --></script><script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-15812721-1']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script><script async="async" src="https://www.googletagmanager.com/gtag/js?id=G-F59WMFKXLW"/><script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-F59WMFKXLW'); </script><!--WTF?--><script> MathJax = { options: { skipHtmlTags: {'[-]': ['code', 'pre']} } }; </script><script id="MathJax-script" async="" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"><!--Gimme some comment!--></script><link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.7.2/styles/xcode.min.css"/><script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/10.7.2/highlight.min.js"><!--Gimme some comment!--></script><script src="https://code.jquery.com/jquery-3.4.0.min.js" integrity="sha256-BJeo0qm959uMBGb65z40ejJYGSgR7REI4+CW1fNKwOg=" crossorigin="anonymous"><!--Gimme some comment!--></script></head><body><div id="top"><div id="backgroundpic"><script type="text/javascript" src="/dhq/common/js/pics.js"><!--displays banner image--></script></div><div id="banner"><div id="dhqlogo"><img src="/dhq/common/images/dhqlogo.png" alt="DHQ Logo"/></div><div id="longdhqlogo"><img src="/dhq/common/images/dhqlogolonger.png" alt="Digital Humanities Quarterly Logo"/></div></div><div id="topNavigation"><div id="topnavlinks"><span><a href="/dhq/" class="topnav">home</a></span><span><a href="/dhq/submissions/index.html" class="topnav">submissions</a></span><span><a href="/dhq/about/about.html" class="topnav">about dhq</a></span><span><a href="/dhq/people/people.html" class="topnav">dhq people</a></span><span><a href="/dhq/news/news.html" class="topnav">news</a></span><span id="rightmost"><a href="/dhq/contact/contact.html" class="topnav">contact</a></span></div><div id="search"><form action="/dhq/findIt" method="get" onsubmit="javascript:document.location.href=cleanSearch(this.queryString.value); return false;"><div><input type="text" name="queryString" size="18"/> <input type="submit" value="Search"/></div></form></div></div></div><div id="main"><div id="leftsidebar"><div id="leftsidenav"><span>Current Issue<br/></span><ul><li><a href="/dhq/vol/18/2/index.html">2024: 
DHQ: Digital Humanities Quarterly
2017, Volume 11 Number 4

Introducing DREaM (Distant Reading Early Modernity)

Matthew Milner
Stephen Wittek <stephen.wittek@mcgill.ca>, Carnegie Mellon
Stéfan Sinclair <stefan.sinclair@mcgill.ca>, McGill University

Abstract

We provide a comprehensive introduction to DREaM (Distant Reading Early Modernity), a hybrid text analysis and text archive project that opens up new possibilities for working with the collection of early modern texts in the EEBO-TCP collection (Phases I & II).
Key functionalities of DREaM include i) management of orthographic variance; ii) the ability to create specially-tailored subsets of the EEBO-TCP corpus based on criteria such as date, title keyword, or author; and iii) direct export of subsets to Voyant Tools, a multi-purpose environment for textual visualization and analysis.

DREaM[1] (Distant Reading Early Modernity) is a corpus-building interface that opens up new possibilities for working with the collection of early modern English texts transcribed thus far by the Text Creation Partnership (TCP), an ongoing initiative to create searchable, full-text versions of all materials available from Early English Books Online (EEBO).[2] To offer maximum access within the bounds of restrictions on protected materials, the interface provides access to two versions of EEBO-TCP: i) a public version, which comprises all openly accessible texts (EEBO-TCP Phase I only, 25,363 texts), and ii) a restricted-access version, which offers access to all texts in the collection (EEBO-TCP Phase I and Phase II, approximately 44,400 texts).[3] The URL for the public version of DREaM is http://dream.voyant-tools.org/dream/?corpus=dream. For a quick overview, please take a moment to view the following three-minute demonstration video:

Figure 1. DREaM demo video (https://www.youtube.com/embed/aZkz7qn6hzo).

The creators of DREaM (Matthew Milner, Stéfan Sinclair, and Stephen Wittek) are members of the Digital Humanities Team for Early Modern Conversions, an international, five-year project that has brought together a group of more than one hundred humanities scholars, graduate students, and artists in order to study the tremendous surge of activity around conversion that followed from developments such as the Reformation, the colonization of the Americas, and increased interaction amongst European cultures.[4] Proceeding from the basic observation that conversion in early modernity was not an exclusively religious phenomenon, contributors to the project have endeavored to chart the movement and evolution of conversional thinking in the period, and to ask how the stories, spaces, and material affordances of conversion contributed to a conceptual legacy that has persisted into modernity (see [Hadot 2010, 1]; [Marcocci et al. 2015]; [Mills and Grafton 2003, xii–xv]; [Questier 1996, 40–75]).[5]
In order to accommodate the massive scope of this enquiry, the DREaM development team began to consider how one might apply "distant reading" techniques to a collection of texts like those made available through EEBO-TCP, with the long-term goal of lighting a way forward for similar investigations of other early modern corpora in the future [Moretti 2013, 3–4]; [Jockers 2013, 48].[6] It is our intention to position our work with EEBO-TCP as a test case for how scholars of early modernity might collect and create personalized corpora using resources and electronic archives of heterogeneous texts. In this respect, working with EEBO-TCP points to fundamental issues of corpus-building and textual analysis that become acute in an early modern context: orthographic regularity, thematic collation, and the use of metadata to allow easy assembly of new corpora for scholarly investigation. This kind of functionality manifests what scholars have long known about archives: that they are driven by the interests and politics of users and the communities they represent. DREaM instantiates this archive-building theory into ready practice as a tool.[7]

The EEBO-TCP collection does not lend itself to computer-based analysis without a great deal of labour-intensive preparation and re-organization, primarily because of two distinct sets of problems: a) the relative inflexibility of the EEBO interface (see http://quod.lib.umich.edu/e/eebogroup/), which functions very well as a finding aid but is a poor mechanism for compiling subsets of full texts; and b) orthographic irregularity. To get a sense of how these problems can impact a workflow, suppose a user wanted to analyze the 740 texts in the EEBO-TCP corpus dating from 1623 to 1625. Although identifying the desired material on EEBO requires nothing more than a simple date search, the process of bringing all the files together into a unified, searchable subset is considerably more difficult, because the EEBO interface does not offer options for batch downloading and therefore requires users to download the files in a desired subset one by one. Once assembled, any subset created in this manner will also require significant editing, because the plain text file one can download from EEBO does not come with an accompanying file for metadata (i.e., information to indicate the title, date of publication, author, etc.). Rather, metadata appears within the file itself, a convention that can significantly complicate or contaminate the results of macro-scale analysis.

On a similar note, texts in the EEBO-TCP collection also feature a high degree of orthographic irregularity, a characteristic that adds an extra layer of complication for researchers who want to conduct any sort of query that involves finding and measuring words.
As anyone who has ever read an unedited early modern text will be aware, writers in the period did not have comprehensive standards for spelling, so any given term can have multiple iterations (e.g., wife, wif, wiv, wyf, wyfe, wyff, etc.). Without a method for managing spelling variance, one cannot reliably track the distribution of words in a corpus, or perform many of the basic functions fundamental to textual analytics.

DREaM addresses both of these issues directly. Although the interface is similar in some ways to the interface for EEBO, it is much more flexible and offers different kinds of search parameters, making the process of subset creation more powerful and convenient, and allowing users of the EEBO-TCP corpus to perform macro-scale textual analysis with greater ease. Secondly, and perhaps most important for corpus text-analysis, DREaM utilizes a version of the EEBO-TCP corpus specially encoded with normalizations for orthographic variants, a feature that enables frequency tracking across multiple iterations of a given term. In other words, DREaM can search orthographically standardized versions of texts as well as the original texts, and can produce subsets containing orthographic variations. This innovation solves a major issue for text-analysis of pre-modern literature. Designed to work seamlessly with Voyant Tools, DREaM can in many ways be viewed as a kind of archive-engine. It comprises an interface that allows quick and easy subset building and export of new corpora from pre-processed texts of EEBO-TCP. Conceptually, however, DREaM is a prototype that facilitates rapid creation of groups of texts for analysis around user-determined parameters.

Figure 2. The interface for DREaM.

Figure 2 shows an example search on the DREaM interface. In the top half of the screen, there are four fields that one can use to define a subset in terms of keyword, title keyword, author, and publisher. Just below these fields, there is a horizontal slider with two handles that users can drag to define a date range. As one enters the subset parameters, a number appears in the top right-hand corner of the Export button to indicate the number of texts the proposed subset will contain (for example, in Figure 2, the user has defined a subset of 56 texts that feature the term "conversion" in the title). To the right of the Export button, a thumbnail line graph offers a rough idea of text distribution across the date range (the graph in Figure 2 shows a significant peak toward the latter end of the range).

Clicking on the Export button will bring up a window where users can choose to download the subset as a ZIP archive, or send it directly to Voyant Tools, a multi-purpose environment for textual visualization and analysis. Users can also choose to download the subset as a collection of XML or plain text files.
At the bottom of the Export window, a convenient drag-and-drop mechanism offers options for naming the files according to year, title, author, publisher, or combinations thereof, a functionality that facilitates custom tailoring for specific sorts of enquiries. For example, a researcher comparing works by various authors would likely want to put the author at the beginning of the file name, while a researcher tracking developments across a specific date range would likely want to begin the file name with the year.

Figure 3. Voyant Tools.

Clicking on "Send to Voyant Tools" in the Export window will open up a new page that shows textual analysis from a suite of digital tools (see Figure 3). The tools displayed in the default settings are i) Cirrus, a visualization tool that correlates term frequency to font size (top left panel); ii) Corpus Summary, a précis of key frequency data (bottom left panel); iii) Keywords in Context, a tool that shows brief excerpts from the subset featuring a target term (bottom right panel); iv) Trends, a line graph that visualizes frequency data for select terms (top right panel); and v) Reader, a tool that enables users to scroll through texts in the subset and view highlighted instances of select terms (top center panel). Users can access further tools or adjust the arrangement of tools on the page by clicking on the Panel Selector icon in the top right corner of each panel. Notably, the tools in Voyant function inter-operably, so an action in one tool will carry over to the analysis for others. For example, if a user clicks on a term in Cirrus, a line graph for the term will appear in the Trends tool, and the Keywords in Context tool will provide a series of excerpts to demonstrate usage of the term throughout the corpus. This functionality enables researchers to switch back and forth very quickly between "distant reading" and "close reading" perspectives, and also makes it easier to follow up on unexpected discoveries, or explore specific items of interest on the fly.

In order to clarify the intervention that DREaM aims to bring to digital humanities research on early English print, it will help to briefly review two similar projects based around the EEBO-TCP collection, and to situate them in comparison to DREaM. The first is Early Modern Print: Text Mining Early Printed English (EMP), a project developed by Joseph Loewenstein, Anupam Basu, Doug Knox, and Stephen Pentecost, all of whom are researchers for the Humanities Digital Workshop at Washington University in St. Louis.[8]
Louis.<a class="noteRef" href="#d3372e446">[8]</a> Like DREaM, EMP features a <cite class="title italic">Keywords in Context</cite> tool, a <cite class="title italic">Texts Counts</cite> tool (similar in function to <cite class="title italic">Corpus Summary</cite> in DREaM), and a version of the EEBO-TCP corpus specially encoded with normalizations for variant spellings. Other key features include a <cite class="title italic">Words Per Year</cite> tool and, most impressively, an <cite class="title italic">EEBO N-gram Browser</cite>, which charts frequencies of a given word or short sentence using n-gram counts for each year represented in the EEBO-TCP corpus. The second project of note is the <cite class="title italic">BYU Corpora Interface for EEBO-TCP</cite> (BYU-EEBO), a site created by Mark Davies at Brigham Young University.<a class="noteRef" href="#d3372e471">[9]</a> As with the other interfaces developed by Davies for large corpora, BYU-EEBO visualizes term frequency data by decade across the full date range of the corpus, facilitates decade-by-decade comparisons of frequency data, and shows how the collocates of a given term evolve over time.<a class="noteRef" href="#d3372e477">[10]</a></div> <div class="counter"><a href="#p10">10</a></div><div class="ptext" id="p10">Despite overlap of certain functions, DREaM, EMP, and BYU-EEBO represent very different responses to a growing demand for tools that reach beyond the conventional use scenarios envisioned by the designers of the EEBO interface in the late nineties. Each project has distinct strengths and weaknesses. At the risk of over-generalization, one might say that BYU-EEBO caters primarily to linguistics research, while DREaM aims for a more open-ended, exploratory style of corpus interrogation — and EMP is somewhere in between. Although all three projects bring benefits of value to a growing field, it is important to note that, because it works seamlessly with Voyant Tools, DREaM is the only one that enables full, direct access to the texts in the EEBO-TCP corpus, a feature that allow users to check the source of data very quickly, or follow up on elements of particular interest. On a similar note, DREaM is also the only platform designed to work in conjunction with other tools. For example, a user could create a subset in DREaM, download it, and conduct analysis in other platforms, such as R or Python.<a class="noteRef" href="#d3372e485">[11]</a></div> <div class="div div0"> <h1 class="head">Transforming & Enhancing EEBO-TCP</h1> <div class="counter"><a href="#p11">11</a></div><div class="ptext" id="p11">Our vision of an easy-to-use interface, and quick corpus creation required transformation and enhancement of the existing EEBO-TCP collection in two major ways. First, was the normalization, or standardization, of orthographic variants which are critical for the statistical analytics that power distant reading methods. Second was the enhancement of the metadata, allowing a richer set of parameters for building corpora of texts to analyze. Both were iterative processes. 
Transforming & Enhancing EEBO-TCP

Our vision of an easy-to-use interface and quick corpus creation required transformation and enhancement of the existing EEBO-TCP collection in two major ways. First was the normalization, or standardization, of orthographic variants, which is critical for the statistical analytics that power distant reading methods. Second was the enhancement of the metadata, allowing a richer set of parameters for building corpora of texts to analyze. Both were iterative processes. The resulting texts that power DREaM are new versions of the EEBO-TCP corpus that combine the EEBO-TCP metadata header with the outputs of each process: a fully normalized version of each text containing the EEBO-TCP TEI SGML, with tagged normalizations, and new metadata drawn from linked open data provided by OCLC.

Before we could normalize the spelling of our 44,418-document corpus, the texts needed some pre-processing. Although the EEBO-TCP is primarily English, it also contains texts in Latin, French, German, Dutch, Spanish, Portuguese, Italian, Hebrew, and Welsh. To make things more complicated, some EEBO-TCP texts are multilingual. We decided to use VARD2, a tool built specifically for Early Modern English.[12] Although its creator Alistair Baron notes in the tool's guide that customizing VARD2 for another language is possible (it has been used successfully with Portuguese), we opted to work with the English-only sections of the texts. We used XPath to identify which documents contained English text elements, <text lang="eng">. Since TCP allows for multiple values in the language attribute, we initially identified potential texts if "eng" appeared as a value of the attribute, e.g. <text lang="eng lat ita">. The result was a working set of 40,170 documents that contained declared English text in some place or another.

In EEBO-TCP, <text> elements can be nested. Combined with multilingual values for the language attribute of the element, this creates potentially complicated conditions for extraction. We quickly found that there were 128 possible XPath locations of <text> in the TCP SGML documents. Here are a few examples of these locations:

    /EEBO/ETS/EEBO/GROUP/TEXT
    /EEBO/ETS/EEBO/TEXT
    /EEBO/ETS/EEBO/TEXT/GROUP/TEXT
    /EEBO/ETS/EEBO/TEXT/BODY/DIV1/P/TEXT
    /EEBO/ETS/EEBO/TEXT/BODY/DIV1/Q/TEXT
    /EEBO/ETS/EEBO/TEXT/BODY/DIV1/DIV2/P/Q/TEXT
    /EEBO/ETS/EEBO/TEXT/FRONT/DIV1/Q/TEXT
    /EEBO/ETS/EEBO/TEXT/FRONT/DIV1/P/TEXT
    /EEBO/ETS/EEBO/TEXT/GROUP/TEXT/BODY/DIV1/DIV2/P/NOTE/P/TEXT

Since we were only interested in normalizing English text, we had to isolate the <text lang="eng"> elements and preserve their order. As shown in the following examples, where the parent was English-only, we extracted the parent; where <text> elements carried multilingual language attribute values, however, we extracted only their English-only children.

    Example 1 (English-only parent; the parent is extracted):
    <text lang="eng">
      <text lang="eng">
      </text>
    </text>

    Example 2 (multilingual parent; only the English child is extracted):
    <text lang="eng lat">
      <text lang="eng">
      </text>
      <text lang="lat">
      </text>
    </text>

Table 1. Extraction of English-only <text> elements from nested, multilingual documents.
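A minimal sketch of this extraction logic, assuming the SGML has already been converted to well-formed XML with lowercase <text> elements and lang attributes as in the examples above (the project's own pre-processing script differed in its details):

    # Yield <text> elements that can be treated as English-only: an element whose
    # lang is exactly "eng" is kept, unless an English-only ancestor already
    # covers it; English children of multilingual parents are kept on their own.
    from lxml import etree

    def english_texts(root):
        for node in root.iter("text"):
            if (node.get("lang") or "").split() != ["eng"]:
                continue
            covered = any(anc.get("lang", "").split() == ["eng"]
                          for anc in node.iterancestors("text"))
            if not covered:
                yield node  # document order is preserved by iter()

    tree = etree.parse("A00257.xml")  # hypothetical file name
    english_body = "\n".join(etree.tostring(n, encoding="unicode")
                             for n in english_texts(tree.getroot()))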
The script we created concatenated the matching <text lang="eng"> elements, and remarried the existing EEBO-TCP metadata file (*.hdr in the EEBO-TCP file dump) to the new English-only "body" to produce a corpus of 40,170 truncated texts that could stand on its own, or be used for orthographic normalization.

Normalizing the 40,170 English-only texts was iterative, as we optimized and tweaked our method. Prior to producing any final sets with VARD2, we ran it on the entire English corpus and edited the dictionary by hand, catching some 373 normalizations that were problematic in one way or another. These amounted to 462,975 changes overall, only 1.03% of the total number of normalizations. A few examples illustrate the problem: "strawberie" became "strawy" rather than "strawberry," and "hoouering" became "hoovering" rather than "hovering," effectively creating a word that simply didn't exist in period English. VARD's statistical method is not robust enough to analyze the particular context that would allow it to discern which normalization is appropriate for more ambiguous words: "peece," for example, could be either "piece" or "peace." In such cases, we decided normalization was too problematic, and thus set VARD to ignore "peece" entirely.

Our pre-processing script not only extracted the <text> elements, it also altered them prior to loading the texts into VARD2. We experimented both with the texts we input into VARD2 and with its orthographic normalization parameters, creating twelve versions of the EEBO-TCP English corpus overall. The purpose was straightforward: to ascertain what degree of normalization best balanced contextual ambiguities like "peece" against an optimal level of normalization for text-analysis tools like Voyant, and to see whether limited pre-processing of the EEBO-TCP texts themselves, prior to VARD2 processing, improved the results. We ran each degree of normalization on three versions of the English-only <text> elements: a "Regular" unedited version acted as our control; a "Cleaned" version removed characters and tags that impeded VARD's normalization process by splitting words; and an "Expanded" version expanded the macron diacritic, typically used in early modern English to represent a compressed "m" or "n" on the preceding vowel (e.g. "com̄ited" became "committed"). The "Cleaned" version removed pipe characters | and editorial square brackets [], but also <supr> and <subs> tags, which broke up words. We opted not to remove any <GAP DESC="illegible"...> elements, as it quickly became apparent that doing so would create more problems than it might resolve. Although VARD could likely handle a single missing character, the highly variable nature of a text gap made it questionable which gaps we should let VARD "patch" and which it could not: two characters, or only a single character? Is that character a word on its own? We felt that judging whether illegibility had been accurately assessed, and where to set the bar for VARD normalization, was subjective enough that removing these elements would be unpredictable and would confound later analytical interests.
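As a rough illustration, the "Cleaned" and "Expanded" passes could look something like the following Python sketch; the tag names, the macron handling, and the default expansion are simplifications, and the actual pre-processing script differed:

    import re

    def clean(text):
        # drop pipes and editorial square brackets that split words
        text = re.sub(r"[|\[\]]", "", text)
        # drop <supr>/<subs> wrappers but keep their contents
        text = re.sub(r"</?(supr|subs)[^>]*>", "", text, flags=re.IGNORECASE)
        return text

    def expand_macrons(text):
        # a combining macron (U+0304) on m or n marks a doubled letter, so
        # "com̄ited" becomes "commited" (VARD then normalizes it to "committed");
        # on a vowel it marks an omitted nasal, guessed here as "n"
        text = re.sub(r"([mn])\u0304", r"\1\1", text)
        text = re.sub(r"([aeiou])\u0304", r"\1n", text)
        return text

    print(expand_macrons(clean("the deuill is com̄ited to pri|son")))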
We ran each of these versions through VARD2, in turn, at four distinct "match" levels in order to assess which level seemed best for the overall corpus. VARD normalizes a word if it finds a match that is at least 0.01% higher than the match parameter a user sets: a 50% setting will only alter a variant with a statistical match of 50.01% or more. We first ran VARD at 50%, but it was evident that this excluded a large number of obvious variants we needed normalized, which scored between 45% and 50%. We re-ran the normalization at 45%. We also ran VARD2 at 35% and 65% as controls, with a mind to producing sets of the English-only corpus that might help us determine whether normalization levels were best adjusted as we proceeded chronologically through the corpus. In each case we set the output to "XML" so that VARD2 would tag the normalizations, retaining the original spelling as an attribute in the surrounding <normalized> tag. These tags were critical later on for indexing the texts in DREaM itself. In the end we decided to use the 45%-match "Cleaned" version of the English-only EEBO-TCP corpus for DREaM, because it seemed to offer the best-balanced normalization following the removal of the tags that prevented VARD2 from working correctly, but without any expansions or "guesswork" resulting from replacement of <GAP> elements. When collating the final file, we noted the set name, as well as the date and match level, as attributes in a wrapper element that enclosed the normalized text. We then coupled the entire XML text to the EEBO-TCP TEI header, and enclosed everything once again in an <EEBO> element.

VARD2 itself was fairly easy to use. Even so, it took some doing to ensure it operated smoothly with the variable sizes of EEBO-TCP texts. Although it has a command-line batch mode, we quickly ran into trouble, as VARD2 would crash handling 5MB texts on the recommended memory settings of "-Xms256M -Xmx512M". The crashes occurred frequently enough, despite raising the memory settings to over 1GB, that we decided to run VARD2 through our own PHP script, executing the program once for each of the 40,170 input files, with the appropriate settings. Even then, 1-3GB memory settings were insufficient to handle the ~100 or so EEBO-TCP files that are over 10MB. We ended up running our VARD2 processing script over a four-day period at "-Xms6000M -Xmx7000M", significantly higher than the recommended settings. Undoubtedly this took longer than batch mode, but it was more stable.
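The per-file wrapper approach can be sketched as follows (in Python rather than the PHP actually used); the jar name and command-line flags are hypothetical placeholders rather than VARD2's real interface, and only the Java heap settings are taken from the account above:

    # Run VARD2 once per input file with a raised Java heap, logging failures
    # for manual retry. "vard2.jar", "--input", and "--output" are hypothetical
    # stand-ins for the real VARD2 command line.
    import pathlib
    import subprocess

    IN_DIR = pathlib.Path("eebo_english")        # hypothetical directory layout
    OUT_DIR = pathlib.Path("eebo_normalized")
    OUT_DIR.mkdir(exist_ok=True)

    for doc in sorted(IN_DIR.glob("*.xml")):
        cmd = ["java", "-Xms6000M", "-Xmx7000M",  # heap settings reported above
               "-jar", "vard2.jar",
               "--input", str(doc), "--output", str(OUT_DIR / doc.name)]
        if subprocess.run(cmd).returncode != 0:
            print(f"VARD2 failed on {doc.name}")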
Enhancing the EEBO-TCP metadata was also iterative, but did not use a specific tool; rather, we processed the metadata using a combination of PHP, MySQL, and text files to build gazetteer data. The existing metadata files of the EEBO-TCP (*.hdr) contain a wealth of information about the file encoding process, as well as several types of identifiers. They also employ a standardized or canonized list of authors and publication places, as well as publication dates. Though the metadata contains the critical identifiers from the Short Title Catalogue, it lacks data that is now available via open data resources like OCLC and VIAF and is often of interest to scholars using the EEBO texts, such as the gender of authors or better dates of birth and death. Moreover, in both the OCLC and EEBO-TCP metadata, the <publisher> element remains an unparsed string, despite containing a wealth of information on publication such as publishers, dates, and historic addresses.

The enhancement of the metadata was two-fold. The first task was to find possible matching OCLC records and IDs for EEBO-TCP files, and to pull in OCLC data to create a new metadata header containing authority dates (some <date> elements in EEBO-TCP contained artifacts like the letter L for the number 1 and the letter O for the number 0), places of publication, and titles, as well as data like gender and birth and death dates from VIAF for EEBO-TCP authors, referencing the OCLC and VIAF IDs as online resources in the new XML. The second was to identify individuals, places, and addresses in the unparsed <publisher> element.

Matching EEBO-TCP texts to OCLC records was not as straightforward as it might appear. EEBO-TCP comprises textual witnesses or instances, while OCLC collates records from its partner libraries in order to build records that are manifestations or "works." This conceptual distinction is critical, as it means that there might well be several OCLC IDs for an individual EEBO-TCP text, potentially with variable metadata. Initially we had thought it possible to obtain a dump of the OCLC records using EEBO as a "series" in our own McGill Library catalogue, allowing quick matching of the OCLC IDs with the EEBO texts. It turned out this was not possible, and so we opted to employ a combination of OCLC's WorldCat Search API and web page searching using the titles of each EEBO text to create lists of possible matching OCLC IDs. Using OCLC's xID service, we compared possible OCLC matches with the EEBO-TCP metadata using the title, dates of publication, and authors. Matching was a matter of confidence: titles were compared using both metaphone and Levenshtein distances to create a confidence level. We did the same with publication dates, as well as places of publication, where present. In the case of authors, we employed the same method (in order to account for spelling variants like Smith vs. Smythe), but also tallied the resulting matches to ensure that when EEBO-TCP noted four authors, an OCLC match did the same. We created a strict scoring system based on Levenshtein distances for authors and titles, and exact matching for dates and places of publication (where they were noted). The same parameters were used to create a score for both the EEBO-TCP metadata and a possible OCLC match: we considered a high-confidence match to be an equal score, or a score within 1 of the original EEBO-TCP's. Inevitably this excluded some possible OCLC candidates, but it resulted in high-confidence matching of OCLC IDs for c. 39,000 of the 44,418 texts in the full EEBO-TCP corpus.
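A minimal sketch of this style of fuzzy matching, using only Levenshtein distance (the project also used metaphone codes; the field names and thresholds below are illustrative rather than the actual scoring scheme):

    # Compare an EEBO-TCP record with a candidate OCLC record: Levenshtein
    # distance for titles and authors, exact comparison for dates and places.
    # eebo and oclc are dicts with "title", "authors", "date", "place" keys
    # (hypothetical field names).
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def is_confident_match(eebo, oclc, max_title_edits=3, max_author_edits=2):
        if levenshtein(eebo["title"].lower(), oclc["title"].lower()) > max_title_edits:
            return False
        if len(eebo["authors"]) != len(oclc["authors"]):   # author tallies must agree
            return False
        pairs = zip(sorted(eebo["authors"]), sorted(oclc["authors"]))
        if any(levenshtein(a.lower(), b.lower()) > max_author_edits for a, b in pairs):
            return False
        # dates and places of publication, where present, must match exactly
        return all(not (eebo.get(f) and oclc.get(f)) or eebo[f] == oclc[f]
                   for f in ("date", "place"))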
With these OCLC IDs, we produced the first revised version of the metadata, pulling in information from the linked VIAF records to flesh out authorial data like dates of birth and death, and gender (EEBO uses TEI, which employs a <sex> element rather than gender to describe this information, despite the problems inherent in the sex/gender distinction). This version was coupled to the English-only corpus we processed with VARD2, along with a truncated version of the original EEBO-TCP metadata. Combining the new metadata and the original in a larger metadata header for the files gave us the ability to present users with the option of searching using the original EEBO-TCP metadata, or the hybrid OCLC-EEBO-TCP metadata.

The second version of the DREaM metadata, which we produced in early fall 2015, took up the challenge of parsing the <publisher> element, which remains unparsed in both OCLC and EEBO-TCP metadata. EEBO-TCP A00257 provides a good example of this string: <publisher>By Iohn Allde and Richarde Iohnes and are to be solde at the long shop adioining vnto S. Mildreds Churche in the Pultrie and at the litle shop adioining to the northwest doore of Paules Churche,</publisher>. John Allde and Richard Jones are not mentioned anywhere in the original EEBO-TCP metadata, only the author, <author>H. B., fl. 1566</author>. Isolating these individuals required the creation of two gazetteers: one for place names, and one for known agents (from the EEBO-TCP author list). Rather than working with 44,418 entries, we ran our script iteratively over the 23,644 distinct <publisher> strings, adding the possible places and individuals to the growing gazetteers, and pulling in variants for authors' names from VIAF's RDF XML files (<schema:alternateName>) retrieved using the results from VIAF's AutoSuggest API (http://viaf.org/viaf/AutoSuggest?query={searchterms}). We also created a short list to translate common early modern first names and abbreviations into modern versions, such as Io. for John, or Wyliam for William. After some 20 passes over the data, including manual editing of the agents gazetteer, patching for new scenarios, and comparing possible matches to the dates of publication in EEBO-TCP texts, we were left with a gazetteer of 195,213 variants representing 24,076 distinct possible names for 19,836 distinct VIAF IDs. Some, like "F.M.", were too imprecise to resolve. Our work also alerted OCLC to problems with the AutoSuggest API, which sometimes returned the canonical name of an author with another individual's VIAF ID, usually that of a co-author (e.g. Nicholas Bourne's VIAF ID appeared for Thomas Goodwill). This required additional processing of incoming data to see if the names it returned matched those we wanted to query.
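The gazetteer lookup over a <publisher> string can be sketched roughly as follows; the abbreviation table, the gazetteer entries, and the VIAF IDs are made-up stand-ins, and the real pipeline drew its name variants from VIAF and iterated over all 23,644 distinct strings:

    # Find known agents in an unparsed <publisher> string by modernizing first
    # names and looking up adjacent token pairs in a gazetteer of name variants.
    ABBREVIATIONS = {"Io": "John", "Iohn": "John",        # keys stored without
                     "Richarde": "Richard", "Wyliam": "William"}  # punctuation
    GAZETTEER = {                      # name variant -> VIAF ID (made-up IDs)
        "John Allde": "viaf:12345",
        "Richard Iohnes": "viaf:67890",
        "Richard Jones": "viaf:67890",
    }

    def find_agents(publisher):
        tokens = [ABBREVIATIONS.get(t.strip(",."), t.strip(",."))
                  for t in publisher.split()]
        hits = []
        for first, second in zip(tokens, tokens[1:]):
            candidate = f"{first} {second}"
            if candidate in GAZETTEER:
                hits.append((candidate, GAZETTEER[candidate]))
        return hits

    example = ("By Iohn Allde and Richarde Iohnes and are to be solde at the "
               "long shop adioining vnto S. Mildreds Churche in the Pultrie")
    print(find_agents(example))
    # [('John Allde', 'viaf:12345'), ('Richard Iohnes', 'viaf:67890')]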
Equally problematic were dates, which appeared inconsistently throughout the VIAF open data. The RDF XML might lack any dates, while the VIAF "cluster" XML might contain them under <ns2:birthDate> and <ns2:deathDate>, or as part of a MARC individual agent record element <ns2:subfield code="d">. Many values in these locations were "0" despite the presence of birth and death years as part of the canonical name; this became a third option for checking whether someone could be the author of the text in question. The last resort was using the decades of publishing activity indicated in the cluster XML as <ns2:dates max="201" min="150">. Consequently, all of these matches have a confidence marker.[13] The latest version of the DREaM metadata for A00257, from EEBO-TCP Phase I, is available as part of our GitHub Gist. The metadata of the new <dreamheader> is richer than the original EEBO-TCP metadata, containing not only the biographical data of VIAF records for the author, H. B., but also similar data for the individuals found in the <publisher> element.

The addition of confidence markers in our new DREaM metadata indicates a different approach to open metadata. Rather than seeing metadata as an authoritative or concrete accounting of actual attribution and representation (as is the practice of archivists and cataloguers), the DREaM metadata (especially with regard to the parsed <publisher> data) should be viewed more as a kind of contingent scholarly assertion. DREaM does not have the resources to double-check all 40,000-odd texts to ensure the exact accuracy of the matching of EEBO-TCP metadata with VIAF and OCLC identifiers. In many cases, doing so requires expert domain knowledge, and the means to accurately resolve entities. A good example of this is "B. Alsop": is it Bernard or Benjamin Alsop? The two printer-publishers were most likely related, but in some instances it isn't possible to distinguish between the two, as in the case of EEBO-TCP text A00012, Robert Aylett, Ioseph, or Pharoah's Favourite, printed by B. Alsop for Matthew Law … (1623), because VIAF lacks birth and death dates for both. Experts know it is Bernard Alsop, but without a corroborating data source there is no method, programmatically, to distinguish between the two as matches for "B. Alsop." By documenting both, and marking a confidence level, we are asserting that metadata is very much the product of ongoing research: it should not be seen as definitive. While DREaM allows researchers to create corpora based on either exact or fuzzy searches, we are also publishing this metadata separately, and offering it to EEBO-TCP, so that the wider scholarly community can refine and critique it.[14]
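One way to read the dating scheme described in note 13 is as a small decision function like the following sketch; the argument names are illustrative, and it assumes the name itself has already matched:

    # Assign a confidence marker to a name match on the basis of available dates:
    # 0 when the publication year falls within the person's lifespan, 1 when only
    # partial dates or publishing-activity decades are known, 2 when no dates are
    # available at all. Dates that contradict the match return None (rejected).
    def date_confidence(pub_year, birth=None, death=None, activity_decades=None):
        if birth is not None and death is not None:
            return 0 if birth <= pub_year <= death else None
        if birth is not None or death is not None or activity_decades is not None:
            return 1
        return 2

    print(date_confidence(1623, birth=1580, death=1640))         # 0
    print(date_confidence(1623, activity_decades=(1600, 1650)))  # 1
    print(date_confidence(1623))                                  # 2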
DREaM as Archive Engine: Enhancements to Voyant Tools

DREaM has led to the enrichment and enhancement of the EEBO-TCP corpus, but the project has also led to some significant architectural and functionality improvements in Voyant Tools.

In particular:

- a skin designed specifically for subsetting a very large corpus based on full-text and metadata searches (the current DREaM skin is specific to EEBO-TCP, but the underlying design and functionality can be reused);
- efficient querying of a corpus to determine the number of matching documents, much like a search engine (previously, functionality was limited to term frequencies);
- support for NOT operators to filter out documents that match a query;
- additional native metadata indexing (e.g. for publisher and publication location), as well as the ability to define user-specified metadata fields that can be included in subsequent queries;
- exporting full texts from Voyant in compressed archives of plain text or XML, with user-defined file-naming protocols;
- efficiency-optimized creation of a new corpus subsetted from an existing corpus.

In addition, work on DREaM helped prioritize other planned functionality, such as reordering documents in a corpus, editing document metadata, access management for corpora (to reflect restrictions for EEBO-TCP), and overall scalability improvements. We had not yet worked on a single corpus with more than 44,000 texts of variable lengths (the input XML for DREaM weighs in at more than 10GB). In short, DREaM provided an ideal test bed for efforts to enhance the scalability of Voyant Tools. This use-case-driven development seems to us an ideal scenario for building new generations of resources in the digital humanities.

Notes

[1] Support for DREaM and the Early Modern Conversions Project comes from the Social Sciences and Humanities Research Council of Canada (SSHRC), the Canada Foundation for Innovation, McGill University, and the Institute for the Public Life of Arts and Ideas (McGill).

[2] Early English Books Online brings together page images (but not necessarily transcriptions) of all English printed matter from 1473 to 1700. Much of the collection derives from early microfilm photographs created by Eugene Power in the 1930s. There are approximately 125,000 titles in the collection. The Text Creation Partnership has completed transcription work on approximately 44,000, or one-third of all texts currently available.
[3] The URL for Early English Books Online (EEBO) is www.eebo.chadwyck.com/home. For information about the Text Creation Partnership (TCP), see www.textcreationpartnership.org.

[4] Early Modern Conversions: earlymodernconversions.com. Early Modern Conversions Digital Humanities Team: http://www.earlymodernconversions.com/people/digital-humanities-team/.

[5] Pierre Hadot wrote that conversion is "one of the constitutive notions of Western consciousness and conscience," arguing that, "in effect, one can represent the whole history of the West as a ceaseless effort at renewal by perfecting the techniques of 'conversion,' which is to say the techniques intended to transform human reality, either by bringing it back to its original essence (conversion-return) or by radically modifying it (conversion-mutation)."

[6] As Franco Moretti has argued, distant reading creates possibilities for analysis where no other option exists: "A canon of two hundred novels, for instance, sounds very large for nineteenth-century Britain (and is much larger than the current one), but is still less than one per cent of the novels that were actually published: twenty thousand, thirty, more, no one really knows — and close reading won't help here, a novel a day every day of the year would take a century or so." Matt Jockers makes a similar point: "Macroanalysis is not a competitor pitted against close reading. Both the theory and the methodology are aimed at the discovery and delivery of evidence. This evidence is different from what is derived through close reading, but it is evidence, important evidence. At times, the new evidence will confirm what we have already gathered through anecdotal study. At other times, the evidence will alter our sense of what we thought we knew. Either way the result is a more accurate picture of our subject. This is not the stuff of radical campaigns or individual efforts to 'conquer' and lay waste to traditional modes of scholarship."

[7] This view of archives as collections of interest, connected to deeply embedded epistemological positions and cultural politics as much as to what is archivable, owes much to the principles of Michel Foucault's Archaeology of Knowledge and Jacques Derrida's Archive Fever.
[<a class="ref" href="#manoff2004">Manoff 2004</a>] provides a useful overview of the field. See also [<a class="ref" href="#parikka2012">Parikka 2012</a>].</span></div><div class="endnote" id="d3372e446"><span class="noteRef lang en">[8] See <a href="http://earlyprint.wustl.edu" onclick="window.open('http://earlyprint.wustl.edu'); return false" class="ref">http://earlyprint.wustl.edu</a>.</span></div><div class="endnote" id="d3372e471"><span class="noteRef lang en">[9] See <a href="http://corpus.byu.edu/eebo" onclick="window.open('http://corpus.byu.edu/eebo'); return false" class="ref">http://corpus.byu.edu/eebo</a>.</span></div><div class="endnote" id="d3372e477"><span class="noteRef lang en">[10] For Davies’ other corpus interfaces see <a href="http://corpus.byu.edu/overview.asp" onclick="window.open('http://corpus.byu.edu/overview.asp'); return false" class="ref">http://corpus.byu.edu/overview.asp</a>.</span></div><div class="endnote" id="d3372e485"><span class="noteRef lang en">[11] For R, see <a href="https://www.r-project.org" onclick="window.open('https://www.r-project.org'); return false" class="ref">https://www.r-project.org</a>; for Python, see <a href="https://www.python.org" onclick="window.open('https://www.python.org'); return false" class="ref">https://www.python.org</a>. </span></div><div class="endnote" id="d3372e504"><span class="noteRef lang en">[12] <a href="http://ucrel.lancs.ac.uk/vard" onclick="window.open('http://ucrel.lancs.ac.uk/vard'); return false" class="ref">http://ucrel.lancs.ac.uk/vard</a>. </span></div><div class="endnote" id="d3372e764"><span class="noteRef lang en">[13] Confidence was a matter of dating. “0” denotes an exact match or a match with a EEBO-TCP author, with a publication date which falls in between an individual’s birth and death dates; “1” lacked either birth or death dates, or used publishing activity dates; and lastly “2” lacked any dates.</span></div><div class="endnote" id="d3372e797"><span class="noteRef lang en">[14] <a href="http://www.matthewmilner.name/2016/05/06/EEBO-TCP-Phase-I-Metadata-Mashup-revision-II/" onclick="window.open('http://www.matthewmilner.name/2016/05/06/EEBO-TCP-Phase-I-Metadata-Mashup-revision-II/'); return false" class="ref">http://www.matthewmilner.name/2016/05/06/EEBO-TCP-Phase-I-Metadata-Mashup-revision-II/</a></span></div></div><div id="worksCited"><h2>Works Cited</h2><div class="bibl"><span class="ref" id="derrida1996"><!-- close -->Derrida 1996</span> Derrida, Jacques. <cite class="title italic">Archive Fever</cite>. Chicago: University of Chicago Press, 1996.</div><div class="bibl"><span class="ref" id="foucault2002"><!-- close -->Foucault 2002</span> Foucault, Michel. <cite class="title italic">Archaeology of Knowledge</cite>. London and New York: Routledge, 2002.</div><div class="bibl"><span class="ref" id="hadot2010"><!-- close -->Hadot 2010</span> Hadot, Pierre. “Conversion.” Translated by Andrew B. Irvine. Accessed September 21, 2015. <a href="https://aioz.wordpress.com/2010/05/17/pierre-hadot-conversion-translated-by-andrew-irvine/" onclick="window.open('https://aioz.wordpress.com/2010/05/17/pierre-hadot-conversion-translated-by-andrew-irvine/'); return false" class="ref">https://aioz.wordpress.com/2010/05/17/pierre-hadot-conversion-translated-by-andrew-irvine/</a>. Originally published in <cite class="title italic">Encyclopaedia Universalis</cite>, vol. 
Jockers 2013. Jockers, Matthew. Macroanalysis: Digital Methods and Literary History. Chicago: University of Illinois Press, 2013.

Manoff 2004. Manoff, Marlene. "Theories of the Archive from Across the Disciplines." portal: Libraries and the Academy, vol. 4, no. 1 (2004), 9-25.

Marcocci et al. 2015. Marcocci, Giuseppe, Wietse de Boer, Aliocha Maldavsky, and Ilaria Pavan, eds. Space and Conversion in Global Perspective. Leiden: Brill, 2015.

Mills and Grafton 2003. Mills, Kenneth, and Anthony Grafton, eds. Conversion: Old Worlds and New. Rochester, N.Y.: University of Rochester Press, 2003.

Moretti 2013. Moretti, Franco. Distant Reading. London: Verso, 2013.

Parikka 2012. Parikka, Jussi. "Archives in Media Theory: Material Media Archaeology and Digital Humanities." In Understanding Digital Humanities, edited by David M. Berry, 85-104. Basingstoke: Palgrave Macmillan, 2012.

Questier 1996. Questier, Michael. Conversion, Politics and Religion in England, 1580-1625. Cambridge, U.K.: Cambridge University Press, 1996.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
</div></div><div id="footer"><div style="float:left; max-width:70%;"> URL: http://www.digitalhumanities.org/dhq/vol/11/4/000313/000313.html<br/> Comments: <a href="mailto:dhqinfo@digitalhumanities.org" class="footer">dhqinfo@digitalhumanities.org</a><br/> Published by: <a href="http://www.digitalhumanities.org" class="footer">The Alliance of Digital Humanities Organizations</a> and <a href="http://www.ach.org" class="footer">The Association for Computers and the Humanities</a><br/>Affiliated with: <a href="https://academic.oup.com/dsh">Digital Scholarship in the Humanities</a><br/> DHQ has been made possible in part by the <a href="https://www.neh.gov/">National Endowment for the Humanities</a>.<br/>Copyright © 2005 - <script type="text/javascript"> var currentDate = new Date(); document.write(currentDate.getFullYear());</script><br/><a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nd/4.0/80x15.png"/></a><br/>Unless otherwise noted, the DHQ web site and all DHQ published content are published under a <a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/">Creative Commons Attribution-NoDerivatives 4.0 International License</a>. Individual articles may carry a more permissive license, as described in the footer for the individual article, and in the article’s metadata. </div><img style="max-width:200px;float:right;" src="https://www.neh.gov/sites/default/files/styles/medium/public/2019-08/NEH-Preferred-Seal820.jpg?itok=VyHHX8pd"/></div></div></div><script>hljs.highlightAll();</script></body></html>