
***

A quick word about the PVW project itself: the Rochester Institute of Technology, Stanford University, the University of Illinois at Urbana-Champaign, and the University of Maryland are concluding a two-year exploratory study investigating the preservation of computer games and interactive fiction. For more on PVW, please refer to the project Web site: (External Link). Sponsored under the Library of Congress's National Digital Information Infrastructure and Preservation Program (NDIIPP), the project has three aims: to identify the specific difficulties that distinguish the preservation of computer games and interactive fiction from that of other forms of digital information we wish to preserve; to develop metadata and packaging practices that allow us to manage the long-term preservation of these digital materials in a manner consistent with the Open Archival Information System (OAIS) Reference Model; and to test those practices by ingesting computer games and interactive fiction into a set of functioning digital repositories. Key deliverables include the development of metadata schema and wrapper recommendations, the archiving of key representative content, and the development of generalizable archiving approaches for preserving this content. Our approach is intended to address both the pressing need to preserve the bits and available representation information of early and significant works now, and the need to begin addressing the more difficult issues surrounding long-term preservation of more recent multi-player interactive virtual worlds.

Based on our experiences with this work and lessons learned, I would like to proffer two key potential preservation paths for HyperCities:

  • HyperCities as software . Presner describes HyperCities as a “platform,” and more specifically as “a generalizable, easily scalable data model for linking together and publishing geo-temporal content using a unified front-end delivery system and a distributed back-end architecture” (6). He adds, “The front-end is almost a complete application itself because it contains all the display logic” (7). To the extent these display logics, presumably calibrated to the needs and imperatives of humanities scholarship, contribute “significant properties” (see Supporting Digital Scholarship) to the end-user experience of HyperCities, what we have here is a problem in software preservation. The challenge is not simply to maintain access to a pile of files—which might be accomplished through periodic migration to more stable formats—but to ensure the possibility of actually running the HyperCities front-end on some future system. Software preservation is at the center of our work in Preserving Virtual Worlds because that is what computer games actually are: software programs. To that end, we have engaged in extensive “packaging” of the original executables, contextualizing them according to the requirements of the Open Archival Information System reference model with so-called “representation information,” that is, all of the second-order information related to operating system and peripherals that is necessary to recreate the complete environment in which the original program executed. This is not a trivial undertaking: the 1980 interactive fiction game Mystery House , a relatively simple text adventure with some crude vector graphics released by On-Line Systems (later Sierra On-Line), required almost 400 MB of representation information, none of it rich media files. Nonetheless, such approaches are technically viable, and demonstrate what may lie ahead for HyperCities.
  • HyperCities as distributed or user-generated content . This is, as I have described it elsewhere, the brick wall at which the whole archival enterprise is currently hurtling at 70 mph. I have suggested, and believe, that in hindsight digital artifacts from the first twenty-five years or so of personal computing will represent an anomalous window, during which the archivist or digital conservationist stood some reasonable prospect of actually holding the original storage media in his or her hand. See Kirschenbaum et al. 2009. Approaches to Managing and Collecting Born-Digital Literary Materials for Scholarly Use. Office of Digital Humanities, National Endowment for the Humanities: (External Link)&id=37 . Nowadays, of course, more and more of our online activity takes place in the so-called cloud. HyperCities leverages this to great effect, as Presner describes eloquently in his paper; the contrast to a project such as the Blake Archive as I described it above could not be more striking. Nonetheless, gathering into one’s arms all of the constituent parts and pieces of one of HyperCities’ slices of curatorial argumentation is virtually impossible (so to speak), not only for technical reasons but also because of the barbed-wire thicket of terms of service and end-user license agreements that govern our access to all of the most common “Web 2.0” services. (This matter has gotten some popular media play as next of kin of servicemen and -women killed overseas have had to fight for access to their loved ones’ email accounts: (External Link) . The best summary of the issues to date is Simson Garfinkel and David Cox, “Finding and Archiving the Internet Footprint”: (External Link) . ) We have had some success in Preserving Virtual Worlds using the Internet Archive’s crawl services to harvest material from sites devoted to games and gamer culture, as well as spot-setting TwapperKeeper archives and the like. 
We did not have occasion to look into the current state of archiving KML layers in Google Earth/Maps, but presumably some tools exist. Nonetheless, the primary issues will surely be legal and social, as opposed to technical. This was manifested most dramatically in our attempts to archive content from several Second Life islands: while it proved possible to write scripts to collect rendering information as it was passed from server to client, and therefore to store aspects of the world’s virtual geometry, permission to do so was another matter. Everything in Second Life is user-generated and user-contributed. Therefore, in order to archive the contents of a space we were forced to attempt to reach out to hundreds of individual users with a request for permission; the response rate was low. In this respect, Second Life functions as a dramatization of the kind of scenario one might expect from collections of user-contributed data on the 2-D Web.
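Since we did not examine KML archiving directly, the following is only a sketch of the technical (as opposed to legal) side of the problem: once a KML layer has been obtained, it can be wrapped in a minimal OAIS-style package with a fixity manifest, loosely following BagIt conventions (payload under data/, SHA-256 sums in manifest-sha256.txt). All names here are illustrative, and this is not a full BagIt implementation.

```python
import hashlib
import pathlib
import tempfile

def package_kml_layer(kml_text: str, layer_name: str, dest_dir: str) -> pathlib.Path:
    """Write a KML layer into a minimal archival package with a fixity manifest.

    Layout (BagIt-like sketch, for illustration only):
        <dest_dir>/<layer_name>/data/<layer_name>.kml
        <dest_dir>/<layer_name>/manifest-sha256.txt
    """
    pkg = pathlib.Path(dest_dir) / layer_name
    data = pkg / "data"
    data.mkdir(parents=True, exist_ok=True)

    # Store the payload exactly as received.
    payload = data / f"{layer_name}.kml"
    payload.write_text(kml_text, encoding="utf-8")

    # Record a SHA-256 checksum so future custodians can verify fixity.
    digest = hashlib.sha256(payload.read_bytes()).hexdigest()
    manifest = pkg / "manifest-sha256.txt"
    manifest.write_text(f"{digest}  data/{payload.name}\n", encoding="utf-8")
    return pkg

# Example usage with a trivial placemark layer:
sample_kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    '  <Placemark><name>Berlin 1925 overlay</name></Placemark>\n'
    '</kml>\n'
)
with tempfile.TemporaryDirectory() as tmp:
    pkg = package_kml_layer(sample_kml, "berlin-1925", tmp)
    print(sorted(p.name for p in pkg.rglob("*") if p.is_file()))
    # → ['berlin-1925.kml', 'manifest-sha256.txt']
```

The hard part, as noted above, is not this packaging step but securing the rights to capture and redistribute the layer in the first place.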





Source:  OpenStax, Online humanities scholarship: the shape of things to come. OpenStax CNX. May 08, 2010 Download for free at http://cnx.org/content/col11199/1.1
