<html> <head> <!-- Google Analytics --> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-NEP4GYP06J"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-NEP4GYP06J'); </script> <script type="text/javascript"> // avoid javascript errors without editing every page var pageTracker = {"_trackPageview": function(page){console.log('');}} </script> <title>Current Research</title> <link rel="stylesheet" type="text/css" href="/css/hopper.css"/> <script type="text/javascript" src="/js/hopper.js"></script> </head> <body> <div id="header"> <a id="banner" href="/hopper/"> </a> <div id="header_side"> <form action="/hopper/searchresults" class="search_form" onsubmit="return validate_form(this,q);"> <input name="q" /> <input type="submit" value="Search" /> <p>("Agamemnon", "Hom. Od. 9.1", "denarius")</p> <p><a href="/hopper/search">All Search Options</a> [<a href="javascript:abbrev_help()">view abbreviations</a>]</p> </form> </div> </div> <div id="main"> <div id="main_top"> <div id="tabs"> <a class=tab href="/hopper/home">Home</a> <a class=tab href="/hopper/collections">Collections/Texts</a> <a class=tab href="http://catalog.perseus.org" target="_blank">Perseus Catalog</a> <a class=tab_active href="/hopper/research">Research</a> <a class=tab href="/hopper/grants">Grants</a> <a class=tab href="/hopper/opensource">Open Source</a> <a class=tab href="/hopper/about">About</a> <a class=tab href="/hopper/help">Help</a> </div> <div id="subtabs"> <a class=tab href="/hopper/research/background">Background</a> <a class=tab_active href="/hopper/research/current">Current</a> </div> </div> <div id="content" class="2column"> <div id="index_main_col"> <div id="research"> <h3>Much work has been ongoing since this page was last updated!</h3> <p>Of particular note:</p> <p> - the <strong>Perseus Catalog</strong>, a foundational resource that provides the basis for 
all of our research, created and maintained here at Tufts; </p> <p>- the <strong>Perseids Project</strong> (note the spelling!), a collaborative editing platform on which users can create micropublications consisting of transcriptions, translations, linguistic annotations and commentaries on a variety of ancient source documents;</p> <p>- work under the <strong>Humboldt Chair of Digital Humanities at the University of Leipzig</strong> that includes the second release of <strong>The Ancient Greek and Latin Dependency Treebank</strong>, a project that began at Tufts and has continued at Leipzig.</p> <p>For more information on these projects, see:</p> <ul> <li><a href="http://catalog.perseus.org">The Perseus Catalog</a></li> <li><a href="http://www.perseids.org">The Perseids Project</a> <ul> <li><a href="http://sites.tufts.edu/perseids/">About Perseids</a></li> </ul> </li> <li><a href="http://www.dh.uni-leipzig.de">The Humboldt Chair of Digital Humanities at the University of Leipzig</a> <ul> <li><a href="https://perseusdl.github.io/treebank_data/">The Ancient Greek and Latin Dependency Treebank</a></li> <li><a href="http://www.dh.uni-leipzig.de/wo/lofts/">Leipzig Open Fragmentary Texts Series (LOFTS)</a></li> <li><a href="http://www.dh.uni-leipzig.de/wo/projects/open-greek-and-latin-project/">Open Greek and Latin Project</a></li> <li><a href="http://www.dh.uni-leipzig.de/wo/open-philology-project/open-persian/">Open Persian</a></li> </ul> </li> </ul> <hr/> <hr/> <h3>Research in 2008/09</h3> <p> The following projects cluster around a number of themes: </p> <ul> <li> <p> <a name="undergradresearch"></a><b>Enabling undergraduate research</b>: Nothing in our view offers more benefits to classics in particular and the humanities in general than our ability to make it the norm for our students to contribute early and often in tangible ways, large and small, to the field. 
We can in this way make good on our promise to produce active citizens who expect to contribute to their world. The print infrastructure of classics had in the twentieth century grown so mature and so cumbersome that even the most advanced undergraduates in the most demanding programs could not expect that they would, as a matter of course, conduct meaningful research or contribute in any tangible, if small, way to the field. As we build a new digital infrastructure for Classics in particular and for the Humanities in general, the situation is now completely different. We can now provide our students with opportunities to begin contributing in small but tangible ways at a very early stage, to disseminate those digital contributions far more widely than any print publications, to allow many contributions to be reused in novel ways to support additional new scholarship and to put those contributions in an infrastructure designed to preserve them along with scientific data sets on which human civilization depends. </p> <p> The section on <a href="/hopper/about/research">research opportunities</a> suggests specific opportunities for students and classes but we encourage students and faculty to suggest ways to contribute to these <a href="/hopper/about/research">active research projects</a> or in any other way. </p> <p> At least three factors allow us to rethink the possibilities for undergraduate research. </p> <ul> <li> <p> First, students have more and better access to primary sources than print ever afforded. We already have the tools with which to carry much further the visions behind publications such as the Loeb Classical Library and Bud&eacute; Editions, providing a range of background and translation support. 
Scholars can now also include full citations for the primary sources behind their statements, knowing that electronic publications do not have the mechanical space limitations of print and that even primary sources previously available only in research libraries are or will soon be available to the world and will contain links to basic background information. Consider, for example, Thomas Martin's <a href="/hopper/text?doc=Perseus:text:1999.04.0009">Historical Overview in Perseus</a> and Christopher Blackwell's <a target="_blank" href="http://www.stoa.org/projects/demos/home" onclick="javascript: pageTracker._trackPageview('/outgoing/StoaDemos');">Demos</a>. </p> </li> <li> <p> Second, we simply cannot do all the work that needs doing if we rely only upon professional scholars and automated systems - we need to enlist our students in this task. The projects outlined <a href="/hopper/about/research">here</a> offer a wide range of tasks well within the abilities of supervised students with various levels of Greek and Latin. Projects such as the <a target="_blank" href="http://chs.harvard.edu/chs/homer_multitext" onclick="javascript: pageTracker._trackPageview('/outgoing/CHSHomer');">Center for Hellenic Studies Homer Multitext</a> have students transcribing scholia and readings from the 10th century Venetus A that have never found their way into print and that are now visible to anyone who downloads the newly created high resolution scans of the manuscript. Student-centered and student-driven annotations can provide a new generation of commentaries that address the actual problems that readers confront as they struggle with linguistically or culturally challenging texts. Students or classes might systematically review and revise the entries for people, places and realia in digitized versions of the old Smith's encyclopedias. 
We have studied Greek and Latin for millennia but Treebanks (see below) allow us to place our ideas about Greek and Latin lexicography, linguistics and style on a quantifiable and explicit foundation. We urge students to adopt particular authors or works, adding new syntactic data to the larger treebanks and then using that data to conduct original research. We have, for example, already published a <a target="_blank" href="http://nlp.perseus.tufts.edu/syntax/treebank/">Treebank</a> that includes Sallust's <em>Catiline</em> among other samples of Latin. Students could begin now comparing Sallust with the other samples while taking on the task of adding the <em>Jugurtha</em> and fragments. </p> </li> </ul> </li> <li> <p> <a name="2500years"></a><b>2500 years later in 2010</b>: the world that Marathon made: Some of the efforts outlined below are inherently broad, others much more focused on particular texts or problems. As a general theme for the coming year, however, we have chosen to focus our efforts on the Battle of Marathon in particular and the world that it helped create in general. Our ultimate goal is to prepare for a conference to commemorate Marathon 2500 years afterwards, in the late summer of 2010. We thus will, where possible, focus collection development on resources that allow us to better address this topic. The topic is, however, a very broad one and includes not only all of the conventional classical Greek period but major elements of Roman history. The topic also invites participation from scholars in contemporary Iran and raises the general topic of classical studies and its ancient ties with not only the geographic Middle East but Islamic scholarship as well. </p> </li> <li> <p> <a name="library"></a><b>A comprehensive, open source, fourth-generation library of Greek and Latin editions</b>: The first digital texts contained transcription with markup representing the page-layout of the print source (e.g., representing that a word is in italics). 
A second generation of collections (of which the primary sources in Perseus provide one example) began to add semantic markup (e.g., representing that a word is in italics because it is a Latin quotation). Collections such as the <a target="_blank" href="http://quod.lib.umich.edu/m/moagrp/" onclick="javascript: pageTracker._trackPageview('/outgoing/MoA');">Making of America</a> and <a target="_blank" href="http://www.jstor.org/" onclick="javascript: pageTracker._trackPageview('/outgoing/JSTOR');">JSTOR</a> then demonstrated a third, much larger generation of collections where readers search text automatically generated by Optical Character Recognition (OCR) software and basic library cataloging data, and then view scanned page images of the source. Where second generation collections extend the scope of first generation collections by adding semantic markup to carefully transcribed text, third-generation collections reverse production philosophies, emphasizing automation and scalability over the artisanal techniques of first and second-generation collections. </p> <p> Third-generation collections focus on the quality of the page images and associated meta-data and assume that the automatically generated text will improve with each new generation of OCR software. In the 1970s and 1980s, when the first- and second-generation collections emerged, scanning and storage technology made libraries of scanned page images impractical - the several hundred megabytes of transcribed Greek from the Thesaurus Linguae Graecae (TLG) required special disk drives that cost tens of thousands of dollars. Texts had to be transcribed well enough to stand on their own and readers would have to rely upon printed copies to identify errors and to find the textual notes, introductions, indices, appendices and other scholarly apparatus. 
At least one major library has informally reported that it would not accept first- or second-generation collections unless they came with digital page images aligned to the transcribed texts. </p> <p> Fourth-generation collections integrate not only carefully transcribed text and the original page images but also other forms of annotation (e.g., morphological and syntactic analysis, indices of people and places, markup for the particular sense of particular words in context). </p> <p> The first fourth-generation texts became available at least as early as 2006, when Perseus aligned manually produced, TEI-compliant editions to page images that it scanned in-house. In 2007, Perseus tested the <a target="_blank" href="http://www.opencontentalliance.org/" onclick="javascript: pageTracker._trackPageview('/outgoing/OCA');">Open Content Alliance</a> (OCA) workflow, in which scholars can pay to scan selected books from OCA partner libraries, and as a result a number of scholarly materials, including not only <a target="_blank" href="http://www.archive.org/details/autolycidesphaer00autouoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveGreek');">Greek</a> and <a target="_blank" href="http://www.archive.org/details/marcusdelingua00varruoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveLatin');">Latin</a> but <a target="_blank" href="http://www.archive.org/details/syriacusthesaur01paynuoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveSyriac');">Syriac</a>, <a target="_blank" href="http://www.archive.org/details/sanskritwrterb01bhuoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveSanskrit');">Sanskrit</a> and <a target="_blank" href="http://www.archive.org/details/sturlungasagainc01aronuoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveOldNorse');">Old Norse</a>, have become available for download from the OCA. 
In 2008-09, Perseus is creating a fourth-generation collection that includes: </p> <ul> <li> <p> Expanded TEI-compliant XML transcriptions of Greek and Latin primary sources within Perseus. </p> </li> <li> <p> An open source collection of image-books representing at least one (and where possible more than one) edition of every classical Greek and Latin author within the OCA. </p> </li> <li> <p> Cataloging data in XML <a target="_blank" href="http://www.loc.gov/standards/mods/" onclick="javascript: pageTracker._trackPageview('/outgoing/MODS');">MODS</a> and <a target="_blank" href="http://www.loc.gov/standards/mads/" onclick="javascript: pageTracker._trackPageview('/outgoing/MADS');">MADS</a> format that is modeled after the <a target="_blank" href="http://www.ifla.org/VII/s13/wgfrbr/" onclick="javascript: pageTracker._trackPageview('/outgoing/FRBR');">Functional Requirements for Bibliographic Records (FRBR)</a> to represent multiple editions, translations, commentaries, indices and other scholarly data. This <a href="http://www.perseus.tufts.edu/~ababeu/PerseusFRBRExperiment.pdf" onclick="javascript: pageTracker._trackPageview('/outgoing/FRBRExperiment');">catalogue</a> is designed to provide the detail now offered by discipline-specific checklists of single-editions (such as the Greek works and authors in the printed Liddell Scott Jones Lexicon and the <a target="_blank" href="http://stephanus.tlg.uci.edu/canon/fontsel" onclick="javascript: pageTracker._trackPageview('/outgoing/TLG');">on-line TLG Canon</a>) within an extensible, standards-compliant library infrastructure. </p> </li> <li> <p> Metadata to support access by book/chapter/section/verse or other conventional scholarly citations under the <a target="_blank" href="http://katoptron.holycross.edu/cocoon/diginc/specs/cts" onclick="javascript: pageTracker._trackPageview('/outgoing/CTS');">Canonical Text Services (CTS) Protocol</a>. 
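Under CTS, a conventional citation such as "Thucydides, Book 1, chapter 86" maps onto a single machine-resolvable identifier. A minimal sketch of the composition (the URN shape follows the CTS specification, but the specific identifiers below are illustrative and should be checked against the catalog):

```python
def cts_urn(namespace, textgroup, work, edition, passage):
    """Compose a Canonical Text Services URN from its parts."""
    return f"urn:cts:{namespace}:{textgroup}.{work}.{edition}:{passage}"

# Thucydides, Book 1, chapter 86 (identifiers shown for illustration):
urn = cts_urn("greekLit", "tlg0003", "tlg001", "perseus-grc1", "1.86")
# urn == "urn:cts:greekLit:tlg0003.tlg001.perseus-grc1:1.86"
```

Given such a URN, a resolver can return the page image or the XML transcription for exactly that passage.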
This metadata would make it possible to generate from a textual citation a dynamic link into electronic page images and/or XML-transcriptions. </p> </li> </ul> </li> <li> <p> <a name="collections"></a><b>Focused collections on selected Greek and Latin authors</b>: To complement the general collection development and scalable services, we are choosing a small number of authors on which to focus particular attention. For these authors, we will collect more editions and associated publications (especially commentaries, indices, specialized lexica), with targeted creation of TEI-compliant XML transcriptions. We will focus upon Herodotus, Aeschylus and Thucydides to illustrate classical Greece and the world that Marathon made. We also have a major commitment to Homer that reflects work already begun at Perseus and collaborations with projects such as the Homer Multitext Project of Harvard's Center for Hellenic Studies. On the Roman side, we will concentrate on Sallust and Propertius, whose corpora are small enough for close study and for which we can, for example, provide comprehensive Treebanks (see below), and on Livy and Cicero, whose corpora are large enough to demand automated methods. </p> </li> <li> <p> <a name="scalable"></a><b>Scalable methods to identify, transcribe and automatically tag Greek and Latin</b>: These services include not only optimized OCR but algorithms that compare the OCR output from different editions of the same work, both to distinguish text from headers, textual notes and marginalia and to distinguish OCR errors in the text from intentional editorial variations. 
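The edition-comparison step can be sketched with ordinary sequence alignment; in this toy example (invented OCR output, not our production pipeline), the word positions where two transcriptions of the same line disagree are flagged for review as either OCR errors or genuine editorial variants:

```python
import difflib

# Two OCR transcriptions of the same printed line from different
# editions (invented examples): where they disagree, either one OCR
# engine erred or the editors printed genuinely different readings.
words_a = "arma virumque cano Troiae qui primus ab oris".split()
words_b = "arma uirumque cano Troiae qui prirnus ab oris".split()

matcher = difflib.SequenceMatcher(None, words_a, words_b)
disagreements = [
    (words_a[i1:i2], words_b[j1:j2])
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag != "equal"
]
# disagreements == [(["virumque"], ["uirumque"]), (["primus"], ["prirnus"])]
```

Here "uirumque" is an editorial u/v variation while "prirnus" is a classic OCR confusion of m and rn; distinguishing the two is exactly the task the algorithms above address.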
The immediate goal is to create a searchable collection of Greek and Latin that provides better scholarly recall than the manually produced collections on which scholars have traditionally relied: about 8% of the unique Greek and Latin words on a given page from any standard edition appear only in the textual notes (in series such as the Loeb Classical Library, which traditionally restrict readings to a minimum, this figure remains c. 4%). Curated collections that contain perfect transcriptions but only the reconstructed text can deliver only 92-96% of the words that the editor chose to print. <a target="_blank" href="http://dl.tufts.edu/view_pdf.jsp?pid=tufts:PB.001.001.00006" onclick="javascript: pageTracker._trackPageview('/outgoing/TDLOCR');">OCR-generated text can already deliver 98-99%</a> of the words from printed Greek and thus immediately provide better recall than perfect transcriptions. The result returned to the reader is, in addition, an image of the full printed edition. </p> </li> <li> <p> <a name="fragmentary"></a><b>Fragmentary authors</b>: Humanists have been working with digital texts for a generation but we have in these first decades focused our efforts upon the large body of texts that survive more or less intact. Most of the works written in antiquity are, however, lost - less than 10% of the works of Aeschylus, Euripides and Sophocles, for example, survive. Most classical authors exist, therefore, in a fragmentary state. In some cases, these texts are scraps of papyrus that survived in the sands of Egypt and are literally fragments. In most cases, however, our surviving fragments are, in fact, passages where surviving authors quote, summarize or simply allude to authors and works that have not survived. Print editions of fragmentary authors typically print excerpts about a fragmentary author along with various categories of scholarly apparatus (the editor's commentary, a translation, variant readings, etc.). 
In a digital world, such fragmentary editions should contain dynamic links that point to editions of the quoting source. The comprehensive collection of Greek and Latin source texts, with scanned page images and searchable OCR-generated text for all, and carefully transcribed TEI-compliant XML for some, gives us the foundation on which we can build the dynamic, hypertextual editions of fragmentary authors. </p> <p> In 2008-09 we will begin work on the Greek fragmentary historians, using M&uuml;ller's <a target="_blank" href="http://www.archive.org/details/fragmentahistori01mueluoft" onclick="javascript: pageTracker._trackPageview('/outgoing/InternetArchiveMuller');">Fragmenta Historicorum Graecorum</a> as a starting point. The output of this work will be both an initial edition of Greek fragmentary historians and the methods by which we represent pointers into source works and associated scholarly annotation. We will create a broad first pass at a comprehensive database of fragments for all Greek authors, but we will focus particular attention on those authors most relevant to the theme of the world that Marathon made. </p> </li> <li> <p> <a name="machine"></a><b>From human-readable information to machine actionable knowledge</b>: If a lexicon includes an entry such as "insula, -ae, f.," students of Latin can recognize that this is a statement that there is a first declension feminine Latin noun with stem <em>insul</em>- and endings that yield nominative singular <em>insula</em>, genitive plural <em>insularum</em>, etc. A machine can generate and recognize forms of this noun but it needs the information about stems and endings in a format that it can process. Commentaries likewise contain information about particular passages that machines could act upon - if we can represent the commentary entries in a format that machines can recognize. Encyclopedias contain many statements about birth and death dates, offices held ("X consul in Y"), kinship (e.g., X son of Y), and other propositional statements. 
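As a concrete illustration, the information packed into "insula, -ae, f." might be made machine actionable roughly as follows (the data structure is hypothetical and only a few first-declension endings are shown):

```python
# The entry "insula, -ae, f." tells a human (and can tell a machine)
# the stem and the ending set; from these the paradigm is generated.
FIRST_DECLENSION = {
    ("nominative", "singular"): "a",
    ("genitive", "singular"): "ae",
    ("nominative", "plural"): "ae",
    ("genitive", "plural"): "arum",
}

def generate_forms(stem, endings=FIRST_DECLENSION):
    """Attach each ending to the stem to produce the inflected forms."""
    return {cell: stem + ending for cell, ending in endings.items()}

forms = generate_forms("insul")
# forms[("genitive", "plural")] == "insularum"
```

Run in reverse, the same table lets a machine recognize that "insularum" is the genitive plural of a first-declension noun with stem insul-.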
While we cannot carefully transcribe every book about classics from our print libraries, a relatively constrained number of reference books contain a large body of information that could, if converted into a machine actionable format, drive a range of services. Every funded project on which we are working depends upon the conversion of some part of the print infrastructure into such machine actionable knowledge bases. We are therefore preparing to convert a range of such print resources into structured, machine-actionable form, including lexica, grammars, commentaries, editions of surviving texts and editions of fragmentary authors. </p> </li> <li> <p> <a name="borndigital"></a><b>Born-digital knowledge bases</b>: While print reference works contain a great deal of information that can be converted into machine actionable form, they cannot provide all of the data that we need to drive some of the services that are most promising for humanists. </p> <ul> <li> <p> First, information available in print format does not always lend itself to automatic extraction - in the general case, the automatic analysis of full text is an unsolved problem. An encyclopedia or dictionary entry may contain propositional statements that automated systems could use but that we cannot extract from the text. Critical editions contain a wealth of statements about how one version of a text differs from various others but these print annotations are hard for automatic systems to decode. </p> </li> <li> <p> Second, our printed reference works leave out information that their authors collected and which automated systems need. The authors of lexica, for example, often have space to print only a selection of the passages that they have sorted into distinct word senses - their sorted slips of paper or file cards contained the wealth of training examples on which machine learning thrives but these are lost or available only as archival materials. 
</p> </li> <li> <p> Third, some categories of information do not have exact print antecedents. Classical philologists can see from the emerging field of corpus linguistics a wide range of annotations relevant to their work. These range from basic categories such as co-reference (e.g., determining whether <em>hic</em>, "this person," refers back to Caesar or Antony in a particular passage) to more broadly interpretive categories such as labeling expressions about time and events (e.g., languages such as <a target="_blank" href="http://www.timeml.org/site/index.html" onclick="javascript: pageTracker._trackPageview('/outgoing/TimeML');">TimeML</a> and Bruce Robertson's <a target="_blank" href="http://heml.mta.ca/heml-cocoon/" onclick="javascript: pageTracker._trackPageview('/outgoing/HEML');">Historical Event Markup and Linking language</a>). Even as we build up treebanks with core syntactic data we need to explore other categories of linguistic markup. </p> </li> </ul> </li> <li> <p> <a name="treebank"></a><b>The Classical Greek and Latin Treebank Projects</b>: Syntactic annotations record information about the relationships between the words: e.g., that <em>orationes</em> in a given sentence is the object of <em>dicit</em> ("s/he speaks, says") and which other words modify it. Such annotations organize the words in a sentence into tree-like structures and can be collected into linguistic databases conventionally called Treebanks. These Treebanks can let us see phenomena such as the changing subjects and objects that a given verb takes over time, sentence structure (e.g., subject-verb-object vs. subject-object-verb), and the individual styles of particular authors, genres and periods. Automated systems can analyze more than 90% of English sentences, but these systems do so by training on pre-existing Treebanks with a million or more words. 
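A syntactic annotation of this kind can be represented as a simple table in which each word records its head and its relation to that head; the short sentence and the labels below are invented for illustration rather than taken from the published Treebank:

```python
# A toy dependency analysis: (id, form, head_id, relation); head_id 0
# marks the root. The labels loosely follow common treebank practice.
sentence = [
    (1, "Caesar",    2, "SBJ"),   # subject of dicit
    (2, "dicit",     0, "PRED"),  # root: the main verb
    (3, "multas",    4, "ATR"),   # attribute of orationes
    (4, "orationes", 2, "OBJ"),   # object of dicit
]

def dependents(head_id, sent):
    """All words attached to the given head."""
    return [form for (_i, form, head, _rel) in sent if head == head_id]

# Queries such as "what objects does this verb take?" become trivial:
objs = [form for (_i, form, head, rel) in sentence
        if head == 2 and rel == "OBJ"]
# objs == ["orationes"]
```

Aggregated over a whole Treebank, queries like this one are what put observations about lexicography, linguistics and style on a quantifiable footing.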
For complex, stylistically idiosyncratic and relatively small classical texts, manual annotation would be necessary in any case, but such manual annotation then allows us to place our understanding of these texts on a fundamentally new, more explicit foundation. </p> <p> In August 2008, we published the latest version of the <a target="_blank" href="http://nlp.perseus.tufts.edu/syntax/treebank/">Latin Treebank</a>, which now includes more than 50,000 words. At the same time, we began work on what will be a 1,000,000 word Treebank for classical Greek. </p> </li> <li> <p> <a name="datamining"></a><b>Text/Data-mining and the Automated production of new knowledge</b>: Once we have converted even the simplest print resources into machine actionable knowledge, we can use that knowledge to generate new knowledge. Consider the examples of traditional print indices and translations. Conversion of print indices involves, at the simplest level, identifying the headwords and citations. This amount of structure allows machines to see that there are, for example, six figures named Alexander in a given corpus and a list of passages where each separate Alexander appears. A named entity identification system can use machine learning algorithms to analyze the contexts in which the different Alexanders appear and to predict the most likely Alexander to which other passages refer. Likewise, if we add basic citations to an English translation (i.e., this passage of English corresponds to the Greek in Thucydides, Book 1, chapter 86), then we can identify words and phrases in the English translation that correspond to words in the original: e.g., Latin <em>orationes</em> corresponds to the English word "speeches" in one passage but to "prayers" in another. We can then use machine learning algorithms to predict in passages where there is no English translation whether <em>orationes</em> more likely corresponds to "speeches" or "prayers." 
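A toy sketch of that kind of prediction, using nothing more than word overlap with contexts a human has already labeled (the Latin snippets are invented; a real system would use richer features and a trained model):

```python
from collections import Counter

# Passages where a human has already labeled the sense of "orationes";
# the snippets here are invented for illustration.
labeled = [
    ("speeches", "orationes in senatu habitae"),
    ("speeches", "orationes contra Catilinam"),
    ("prayers",  "orationes ad deos"),
]

def predict(context, labeled):
    """Choose the sense whose labeled contexts share the most words."""
    scores = Counter()
    words = set(context.split())
    for sense, snippet in labeled:
        scores[sense] += len(words & set(snippet.split()))
    return scores.most_common(1)[0][0]

predict("orationes in senatu", labeled)  # -> "speeches"
```

The same overlap-and-vote pattern serves for disambiguating the six Alexanders: each labeled passage becomes a training context for one particular Alexander.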
We can also begin to use these lower level conclusions (e.g., Antonius in passage X designates the famous Marc Antony the Triumvir who appears also in Shakespeare; <em>orationes</em> in passage Y corresponds to "speeches") to identify more patterns that indicate people, places, word meanings and other topics (e.g., what other people and places appear in conjunction with Antony? What other words in Latin and Greek correspond to the English word "prayer" in various periods and genres?). At this point, we have moved from patterns that human beings have already labeled (e.g., passage X describes Marc Antony while passage Y describes another particular Antonius), to inferences that human beings make hundreds or thousands of times a day but do not have time to record (e.g., when readers automatically distinguish references to Alexandria, Egypt, vs. Alexandria, VA), to patterns that no human being would see by simply reading through a text (e.g., a survey of the Latin and Greek terms corresponding to "prayer" that appear in texts containing hundreds of millions of words and written over two millennia). </p> </li> <li> <p> <a name="adaptinfo"></a><b>Adapting linguistic and cultural information for particular readers</b>: Once we begin assembling large bodies of information, we need methods to provide individual users with information adapted to their general backgrounds and their immediate purposes. There are two ways in which to adapt large bodies of information. Personalization compares the behavior of a given user against that of previous users to suggest actions of interest (e.g., people who bought book X also bought books Y and Z). Early experiments showed that similar techniques were applicable for readers of Greek and Latin: once readers ask about four words from a particular passage, we can predict two thirds of the other words about which they will have questions. </p> <p> Our work in 2008-09 focuses primarily upon customized reading support. 
Customization follows directions from the user (e.g., a user created profile that requests all new information about Pericles' <em>Funeral Oration</em>). Our work focuses upon customized vocabulary profiles in which we have digitized the vocabularies from textbooks of Greek, Latin and Arabic. We want to be able to answer two basic questions: first, we want readers to be able to identify words that they have not yet encountered in a given chunk of text and then to rank the unseen words according to various criteria of significance; second, we want to be able to find passages that best match the existing vocabulary of a particular reader. </p> </li> <li> <p> <a name="scaife"></a><b>The Scaife Digital Library</b>: Named after the late Ross Scaife, the Scaife Digital Library is being developed as a distributed collection and a method whereby humanists from around the world can automatically aggregate their content. The Scaife Digital Library contains durable objects that (1) have received peer review, (2) are in sustainable formats such as the <a target="_blank" href="http://epidoc.sourceforge.net/resources.shtml" onclick="javascript: pageTracker._trackPageview('/outgoing/epiDoc');">epiDoc TEI stylesheet</a>, (3) have a long-term home such as an institutional repository separate from the producer of the object, and (4) are available under open licensing for third-party redistribution and/or further development. </p> <p> All of the TEI-compliant XML texts already available for download from the Perseus Digital Library satisfy the conditions 1, 2, and 4. Placing these and other objects within the <a target="_blank" href="http://dl.tufts.edu/" onclick="javascript: pageTracker._trackPageview('/outgoing/TDL');">Tufts Digital Library</a> will satisfy the third condition. 
We plan therefore to move as many Perseus objects as possible into the Tufts Digital Library, with a particular focus upon newly scanned image books and existing commentaries, lexica, encyclopedias and other materials not yet released under an open source license. Our goal at this stage is to provide basic identifiers that will allow users to retrieve these objects from the Tufts Digital Library. </p> </li> <li> <p> <a name="repository"></a><b>Institutional Repositories for Advanced Humanities Content</b>: The Scaife Digital Library addresses the problem of long-term preservation for particular objects but we need services as well with which to use the objects. Libraries have successfully maintained the products of intellectual labor for generations and have begun designing institutional repositories that can maintain digital content. These institutional repositories are, however, generally prepared to support very simple digital objects such as images and lightly structured journal articles. We are thus preparing to develop for one major institutional repository system, <a target="_blank" href="http://www.fedora-commons.org/" onclick="javascript: pageTracker._trackPageview('/outgoing/Fedora');">Fedora</a>, the data models needed to support the more complex objects with which students of the three classical languages regularly work. These include the ability to extract reference articles (e.g., the entry on Alexander the Great in encyclopedia A), dictionary entries and particular word senses from machine readable dictionaries (e.g., word sense II.2.a for word X), and the text associated with canonical text citations (e.g., the Greek text and English translation for section 2, chapter 86, book 1 of Thucydides). To do this, we are starting to adapt the Perseus Digital Library system to work with Fedora as a backend system. 
The goal of this effort is ultimately to release a version of the Perseus Digital Library system that institutions can download as a turn-key solution for scholarly collections. </p> </li> <li> <p> <a name="grid"></a><b>Grid-Enabled Open Services</b>: The Perseus infrastructure has depended upon a traditional architecture where we apply programs stored on local servers to locally stored texts and other data. We are working with colleagues at Imperial College London to begin a distributed architecture that works with services and collections from multiple sources. Such an architecture is designed to allow scholars and projects to create their own configurations, perhaps substituting one morphological analyzer for another or adding new modules for particular text mining or visualization functions. Such an architecture also allows us to tap into much greater computational resources, drawing upon services driven by grid computing and/or the services from internet giants such as Google. </p> </li> </ul> </div> <!-- Research div --> </div> <!-- main_col div --> <div id="img_side_col" style="font-size:small; text-align:left"> <h4>Research Themes</h4> <ul> <li><a href="#undergradresearch">Enabling undergraduate research</a></li> <li><a href="#2500years">2500 years later in 2010</a></li> <li><a href="#library">A comprehensive, open source, fourth-generation library of Greek and Latin editions</a></li> <li><a href="#collections">Focused collections on selected Greek and Latin authors</a></li> <li><a href="#scalable">Scalable methods to identify, transcribe and automatically tag Greek and Latin</a></li> <li><a href="#fragmentary">Fragmentary authors</a></li> <li><a href="#machine">From human-readable information to machine actionable knowledge</a></li> <li><a href="#borndigital">Born-digital knowledge bases</a></li> <li><a href="#treebank">The Classical Greek and Latin Treebank Projects</a></li> <li><a href="#datamining">Text/Data-mining and the Automated production of new 
knowledge</a></li> <li><a href="#adaptinfo">Adapting linguistic and cultural information for particular readers</a></li> <li><a href="#scaife">The Scaife Digital Library</a></li> <li><a href="#repository">Institutional Repositories for Advanced Humanities Content</a></li> <li><a href="#grid">Grid-Enabled Open Services</a></li> </ul> </div> </div> <!-- 2column div --> </div> <!-- main div --> </body> </html>
