
4.1 Crowd-sourcing research: data collection versus data analysis

Our project sought to use data produced by members of the public. As such, it fits into the context of 'citizen science' projects, which are long established, especially in natural history research: the US National Audubon Society's annual Christmas Bird Count was first undertaken in 1900 (Cohn 2008). There is also a relatively long association of intellectual radicalism with what are now called citizen science projects (cf. Silvertown 2009; Zimmerman et al. 1972). Citizen science projects can have a commendable ethos of inclusivity and political engagement, raising awareness of issues important to contemporary society (e.g. Cohn 2008, 197). Such projects are therefore potentially rich tools for the promotion of archaeology.

What is relatively new is the number of citizen science projects with a digital collection component and the volumes of data they generate. In 2008, Cornell University noted 200 US citizen science research projects (Cohn 2008). The capacity of the Internet to organise participants and to collate and share data means that digital technologies have been critical to the rise of such projects, GalaxyZoo being one of the most famous examples (Raddick et al. 2010; Nov et al. 2014).

As part of the gamification of these research projects, many provide in-play rewards, such as conferring levels of expertise. Some, such as StarDust, list successful contributors as authors on publications (Nov et al. 2014).

Far fewer citizen science projects appear to be associated with the arts or heritage. In British archaeology, the Hillforts Atlas project relies on 'citizen scientists' to record and survey details of Iron Age hillforts, while the ACCORD project, like ours, engages members of the public in photogrammetric recording (cf. Bonacchi 2012). In the arts field, TAGGER asks members of the public to classify paintings in order to produce a robust inventory of the publicly available catalogue. The success of such projects can be assessed at various levels: the number of contributors, the number of records created, and so on. However, the diversity of these projects' research aims makes assessing 'success' in digital public archaeology challenging.

Attempts at classifying digital citizen science projects have varied according to discipline, project goal, and the degree to which the project is virtual (Wiggins and Crowston 2011). We believe that a more fundamental distinction is between projects that work with members of the public to classify and characterise extant data, and those that ask members of the public to help generate data. Classification is ideally suited to a digital platform, as the success of GalaxyZoo demonstrates. Digital public archaeology projects often crowd-source classification and characterisation, for example the AncientLives and MicroPasts projects.

'Generative' projects are particularly common in the natural sciences. Undertaking generative digital archaeological projects may produce specific tensions, and these may be most notable in projects that seek to research material culture or sites with hyper-local physical presences (i.e. that are rooted in the fundamental materiality of archaeological sites and finds) while attempting to work with a cyber-local community to generate digital data.

The approach taken in the HeritageTogether project was a 'generative' one: the archaeological fieldwork was done by volunteers. In asking members of the public to visit sites and photographically survey them, we were asking people to work with us on a relatively specific set of fieldwork undertakings. We suggest that 'generative' rather than 'analytical' projects will probably always attract fewer participants, and that this may be especially so when dealing with a specific set of material culture or landscapes. We would probably have seen more 'digital archaeologists' actively contributing data had we been more ambitious in defining our study area.