
Why e-monographs haven't caught on (yet)

It's not as though there are no likely visions for e-monographs - there are, and more are coming rapidly. Holtorf and Costopoulos in this volume describe successes, as does Harold Dibble with his Combe Capelle CD (Dibble and McPherron 1995), and Davis et al. (1996) have published an excellent model for how an e-monograph might be envisioned, to name just a few.

One of the most stimulating conferences on the future of the e-monograph was held in January 1999. Hosted by the Digital Archaeology Laboratory of the University of California, Los Angeles, and its director, Louise Krasniewicz, it was concerned with how e-monographs might be creatively developed, what sorts of standards might emerge to guide their publication, and what strategies could be employed to make these publications legitimate in the minds of the scholars who will have to read them and the administrators who will pass judgement on them, and on us, as we use them to advance our careers. Over 30 archaeologists, publishers and researchers, many of whom have actually published e-monographs, debated over the course of two days what the future of the e-monograph was likely to be in our field.

It became quickly apparent that, in great part, we were preaching to the choir. Almost everyone in the room accepted a digital future as necessary, inevitable, and basically good. But then, that was the intent of the conference. As Louise described her vision of the conference and what she hoped it would accomplish, she narrowed her goals to three themes: first, an evaluation of the model of the e-monograph currently in production at the Digital Archaeology Lab; second, reflection on how a template for e-monographs might emerge from that evaluation; and finally, a discussion in some detail of what vision each of us had of e-monographs and how these visions might be codified, or at least offered as potential publication standards for others to follow or emulate.

It also became apparent rather quickly that while most of us felt comfortable with a digital future, we often had fairly distinct and sometimes opposite views on what that future might be. In many ways, the discussion that followed resembled an expanded version of the exchange between Holtorf and Costopoulos in this volume. Fundamentally, these debates focused on how one defined an e-monograph and to what extent the e-monograph was likely to become a vehicle for the routine presentation of primary archaeological data. Here we reprise the position of Gaffney and Exon, who argue that the main goal of archaeological publication in the future will turn increasingly to the presentation of data in formats that any researcher could access with relative ease. Many in the audience agreed with this sentiment, but others were somewhat more sceptical that e-monographs per se would become, or should become, archive-like.

As the discussion progressed, a summary sheet was given to the participants. I offer my copy, along with a few marginalia (Figure 3). As you can see, the possibilities that an e-monograph offers are limitless, but there was nevertheless spirited debate about just what form an ideal e-monograph might take. We did easily agree on one thing: an e-monograph can be whatever its author intends, and may present as much or as little of the primary archaeological data used to construct the argument (or the synthesis) as the author sees fit. Here, however, we began to diverge. For some, there was a strong sense that the e-monograph would be very much like its p-monograph counterparts, and could be 'read' in essentially the same fashion. Any data contained in the monograph would be 'value-added', but would not be the focus of the e-monograph per se. Others argued for more radical forms that took advantage of hypertext and its powerful linking capacity as a means by which readers and students could explore the e-monograph rather than simply 'read' it. In such instances, it would hardly resemble a 'book' in the traditional form, but would instead be more like a carefully constructed web site. Following on this, others still charted a middle course, one that retains some of the traditional hierarchical structure of the p-monograph and its mode of data presentation, but nevertheless allows readers multiple pathways or access points into the 'text'. One or two of the more sceptical participants noted that this compromise is an enhanced version of how most people 'read' p-monographs - one looks at the pictures, goes first to the index or table of contents or perhaps citations, and moves through the book in a very non-linear manner.

This compromise model is the one selected by the Digital Archaeology Lab for their first imprint, which will present the work of Karen Wise of the Los Angeles County Museum of Natural History and her excavations at a site called K4, a Preceramic village on the far southern Peruvian coast. Parts of the e-monograph have a more-or-less traditional form, and one can read them as one would a book. However, because she took extensive video footage of the excavations, including some of the very painstaking removal of burials, there are many non-traditional elements contained within. Although significant quantities of data are presented, especially imagery of the excavations and many of the artefacts, most of the data are summarised into table form, and these tables are presented in their entirety. Relatively little other primary data, in the sense in which I have defined it, will be part of the e-monograph.

All of this sounds, and is, wonderful. Here we can have our cake and eat it too. You want synthesis? Write as much as you want within the editorial parameters of the publisher since space is not a serious obstacle unless one gets very creative. You want data? You can't publish it all, but you can give us most of it and, if you are insistent, you can burn as many CDs as you like. So if things are as wonderful as this, why aren't we all rushing madly to publish e-monographs?

Cornelius Holtorf certainly knows many of the reasons why. One of the most obvious, especially in his experience, is the simple matter of the generally conservative nature of academic publishing and the ways in which the products of the publication process are evaluated. Although e-journals have overcome some of this inertia, it has yet to be surmounted for the e-monograph. Some authors, such as Donald Sanders (in press), have noted that the creation of an e-monograph, especially one that makes use of virtual reality or complex reconstructions of archaeological sites, begins to look more like a film with a director and producer than a monograph with a series of authors. Complex e-monographs require a very significant range of skills to create, and the 'author' may provide the intellectual guidance but relatively little of the actual work of creating the publication. Deciding academic credit, one of the functions of the peer review and publication system, is thus complicated. One can see through Holtorf's narrative the careful steps taken by the administrators as they were confronted with his vision of the e-monograph. His success, however, should remind us that questions of authorship in these circumstances can be resolved relatively easily, but that it will take constant challenges to make that happen.

So if resistance can be overcome, are we looking at simply a matter of time for the acceptance of the e-monograph, or is there something else that seems to thwart its maturation in archaeology? Another issue is certainly cost. At the conference at UCLA, I asked one of the participants, who runs a commercial shop that assists archaeologists and others in the creation of complex digital imagery, how much a particular project had cost to create. I had a vision of a substantial sum, and was surprised to learn that the cost was only about $80,000. As I reflected on that figure, however, it became clear to me that this amount is relatively large when compared to the costs of publishing p-monographs, but not substantially so. And although I don't know how much time and money are being spent on UCLA's e-monograph, I do know that the time spent by Wise and her associates in creating the publication is quite substantial. Vin Steponaitis made it clear that the Occaneechi Town monograph was not inexpensive to produce. However, each of these e-monographs is rich in hypertextual links, data, imagery, and reconstructions, and so the time and expense faced in their creation is not surprising. One can see, though, how most publishers would be daunted by the costs of developing these e-monographs, especially in an economic climate for p-monograph publication that seems to worsen yearly (Wasserman 1998). We also learned at the conference that most publishers do not have the infrastructure to develop their own e-monographs and that, because of cost, they have been reluctant to outsource this work unless they are either true pioneers intent on changing the status quo or are convinced there is a market out there that will buy the book and help them recoup their costs. Understanding this also helps us better understand the call to use current, off-the-shelf software both to develop the e-monograph and to provide the mechanism through which it is 'read'.

The costs of e-monograph publication, however, will decrease somewhat as archaeologists begin to employ digital technologies early in the fieldwork process. Basic field recording techniques in archaeology have changed very little over the past 100 years, and involve the use of a combination of paper forms, notebooks, graph-paper drawings, and standard 35mm and large format photography. While these techniques are reliable, they are very limiting, especially as one moves from the field to analysis and on to data publication, presentation, and archiving. Field drawings must often be redrawn and digitised by hand for integration into advanced geographic information systems (GIS). These same field drawings must also be linked by hand to computerised databases that describe their contents. Handwritten field notes are rarely transcribed so that they can be searched electronically, and forms, while they always contain important information, have to be summarised and described, and their content re-transcribed into other paper or possibly digital records. They are searched as paper records have always been searched: by visually scanning them, flipping through ring binders or file folders. Slides, prints, and negatives can be integrated into databases, but it is difficult to integrate them easily and consistently into sets of field drawings and maps. And while many archaeologists have begun to digitise these data so that modern information technology tools can be used to examine them more rapidly, the costs of this post-hoc approach are very substantial and, further, it tends to introduce new sources of error into the primary data. Indeed, many archaeologists have come to believe that traditional field recording methods substantially slow the pace of analysis, and certainly the publication and archiving of the results of field research, especially as opportunities for digital publishing of archaeological projects (either via the WWW or on CD/DVD) become more desirable and commonplace.
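To make the contrast concrete, here is a minimal sketch of what a born-digital field record might look like. It is purely illustrative: the field names, identifiers, and file paths are hypothetical and do not describe any existing recording system. The point is simply that a record created digitally in the field can tie a drawing, photographs, coordinates, and the excavator's notes together in one searchable structure, rather than requiring each to be re-transcribed and re-linked by hand afterwards.

# A hypothetical sketch of a born-digital field record. The field names,
# identifiers, and file paths are invented for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextRecord:
    context_id: str                 # excavation context or locus number
    easting: float                  # grid coordinates from a total station
    northing: float
    elevation: float
    drawing_file: str               # digitally drawn or scanned plan
    photo_files: List[str] = field(default_factory=list)
    notes: str = ""                 # notes entered on a pen computer

def search_notes(records: List[ContextRecord], term: str) -> List[ContextRecord]:
    """Return every record whose notes mention the search term."""
    term = term.lower()
    return [r for r in records if term in r.notes.lower()]

if __name__ == "__main__":
    records = [
        ContextRecord("CTX-101", 512.30, 1048.75, 92.41,
                      "drawings/ctx101_plan.svg",
                      ["photos/ctx101_a.jpg"],
                      "Compact ash lens overlying floor; possible hearth."),
        ContextRecord("CTX-102", 513.10, 1049.20, 92.35,
                      "drawings/ctx102_plan.svg",
                      [],
                      "Burial cut truncating the ash lens in CTX-101."),
    ]
    for r in search_notes(records, "ash"):
        print(r.context_id, r.drawing_file)

Records of this kind can be exported directly to a GIS layer or a relational database, which is precisely the step that currently demands redrawing and re-keying.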

Recent advances in information and imaging technology offer a number of potential improvements to traditional field recording procedures: rugged, fast, and high-capacity pen computers and the specialised software necessary to use them; high-end total stations for mapping complex archaeological sites; digital cameras; and significant improvements in GIS and CAD software that enhance their ability to deal with near-3D representation and visualisation. Together, these can lead to substantially more rapid analysis, publication, and archiving of primary archaeological data. I've just been awarded a National Science Foundation grant to pursue the implementation of these sorts of technologies in my ongoing field research in Tibet and Peru. Our vision of how some of these technologies might be melded together can be seen in Figure 4, and how we have begun to approach it has been stimulated by Hodder's work at Çatalhöyük, which he describes briefly in this volume. This strategy, and others like it, will become increasingly common as better and more sophisticated technologies are developed.

So if we can eventually reduce the costs of e-monograph publication, what problems remain? The most basic is longevity. Currently, e-monographs are published on CD-ROM, and will likely migrate to the DVD format as it becomes more common. One dimension of the longevity problem is the stability of the medium itself. No one is quite sure how long CDs will last, and estimates vary widely. The National Media Laboratory has published its expectations for the longevity of CDs stored at 68 degrees Fahrenheit and 40% relative humidity: a standard-quality CD-R is expected to last 2 years and a standard play-only CD-ROM 5 years, while a high-quality CD-R may last 30 years and a high-quality CD-ROM 50 years. So-called permanent paper (low acid, buffered) will last for 500 years, and most papers other than copier paper will last 20-50 years. The medium itself, then, compares well to all but the highest quality of paper.

The more serious dimension of longevity is that of the hardware and software systems used to read these media. We all have, or know of, horror stories about ancient 9-track tapes of original dissertation data that can no longer be read. I know colleagues, who will remain nameless, who still have all of their punch cards from long-finished projects and who rely on these as their 'data archive'. Good luck. Countless authors have discussed this problem, and its long-term implications are nicely summarised in a recent volume called Time and Bits: Managing Digital Continuity (MacLean and Davis 1998). Simply put, most of what goes onto some sort of digital medium these days is quite likely to be lost in a very short period of time. Both hardware and software change at incredibly rapid rates, and the costs of data migration (the transfer of data from one platform to another) continue to rise. Alternative strategies, such as emulation (creating a 'virtual machine' that mimics an old platform on a new one), are not much better. Many of these changes offer limited or no upgrade paths, and unless a stakeholder is willing to make the investment, it is likely that the upgrade will never be done. As our e-monographs become more complex, and invoke more esoteric bits of software and obscure programming languages, it will be all the more difficult for future generations of readers to study whatever it is we have done.
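It is worth being concrete about what 'data migration' actually involves at its simplest. The following is a deliberately small, hypothetical sketch: the file names and the assumption of a tab-delimited source are mine, and real migrations are usually far messier. It rewrites an ageing delimited export as plain UTF-8 CSV and records a checksum, so that a future reader can at least verify that the bits have survived intact even if the original software has not.

# A hypothetical sketch of a minimal data migration: an old tab-delimited
# export is rewritten as UTF-8 CSV, and a checksum manifest is saved so
# that future users can verify the migrated file has not silently decayed.
import csv
import hashlib
import json
from pathlib import Path

def migrate(old_file: Path, archive_dir: Path) -> None:
    archive_dir.mkdir(parents=True, exist_ok=True)
    new_file = archive_dir / (old_file.stem + ".csv")

    # Rewrite the tab-delimited rows as comma-separated UTF-8 text.
    with old_file.open(encoding="latin-1") as src, \
         new_file.open("w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for line in src:
            writer.writerow(line.rstrip("\n").split("\t"))

    # Record a SHA-256 checksum alongside the migrated file.
    digest = hashlib.sha256(new_file.read_bytes()).hexdigest()
    (archive_dir / "manifest.json").write_text(
        json.dumps({new_file.name: digest}, indent=2))

if __name__ == "__main__":
    # Hypothetical file names; 'legacy_export.tab' stands in for any
    # ageing, software-dependent data file.
    migrate(Path("legacy_export.tab"), Path("archive"))

Even a step as small as this has to be repeated every time formats or platforms shift, which is exactly why the cumulative cost of migration keeps rising.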

One of the most instructive things an archaeologist can do is to visit the Dead Media project web page (http://www.islandnet.com/~ianc/dm/dm.html). Listed on it are hundreds of technologies for which media were developed and which can no longer be accessed because the technology has 'died', either through competition with others or through simple neglect. The Dead Media Project was stimulated by the writings of the cyberpunk author Bruce Sterling who, in 1995, delivered a paper entitled 'The Life and Death of Media' (http://www.well.com/conf/mirrorshades/deadmed.html). In one of his felicitous phrases, he wrote:

"You see ladies and gentleman, we live in the Golden Age of Dead Media. What we brightly call "multimedia" provides a whole galaxy of mutant recombinant media, most of them with the working lifespan of a pack of Twinkies. Mastering a typical CD-ROM is like mastering an entire new medium by using a frozen watch-cursor. And then the machine dies. And then the operating system dies. And then the computer language supporting that operating system because as dead as the Hittite language. And in the meantime, our entire culture has been sucked into the black hole of computation, an utterly frenetic process of virtual planned obsolescence." Sterling 1995.

Is this the future of archaeological data if published in e-monographs? In this context, I find Holtorf's attitude to his CD dissertation somewhat disconcerting:

"I am therefore not very optimistic as far as the life-time of the two CD-Roms are concerned which I submitted. The day will come sooner than many may imagine now that someone will be curious enough to want to read my original thesis and find out that this has actually become physically impossible. This is more a problem for the National Library than it is one for me. As far as I am concerned, the thesis has fulfilled its main (though not exclusive) purpose of getting me a Doctoral degree and satisfying those who gave me financial and other support. Regarding the content of my work, it is about memory and multiple interpretations of ancient monuments anyway." Holtorf 1999.

It may well be the library's problem, and that is precisely the point: should we not have some strong concern for the materials we publish? The maintenance of the canonical record, as it is called (the warehousing of knowledge), is one of our primary tasks as scholars.

Long-term access, then, seems to be the most important stumbling block to the success of the e-monograph. Should it be? Libraries are dealing with e-journals and creating solutions for archiving them, so can plans for archiving e-monographs be far behind? One solution is to develop a so-called 'just-in-time' scholarly monograph, in which a library or central archive acts as a repository for e-monographs (Bennett 1999). This model is based on the success of University Microfilms, Inc., which provides photocopies of many different kinds of documents on demand. The primary stumbling block to this model appears to be the lack, at least at the moment, of an effective peer review process that creates a context for quality control.

In the long run, it may be the case that the e-monograph is not the best place for the presentation of primary archaeological data. But if Gaffney and Exon are right, as I believe they are, and we must publish our data and make it more accessible to very disparate audiences, how shall we do it?


