
2.2 Recent developments

Although much progress has been made in implementing predictive maps, it is not yet possible to establish quality standards, in terms of accuracy and precision, that would ensure their real effectiveness.

In a recent paper, Dalla Bona (2005) discusses the possibility of formalising the application of predictive models through the establishment of clear rules, so that they may become part of a codified system accessible to non-expert users. This is only the latest step in a long debate, marked by many dichotomies, over the possible approaches to predictive modelling. In particular, Van Leusen emphasises how, over time, the debate has increasingly focused on the definition of the aims of a predictive model rather than on the theoretical and methodological aspects that characterise it (Van Leusen et al. 2005).

Predictive models have often been split into two categories: inductive models, based on the relationship between a sample of known archaeological sites and a set of variables (generally environmental) that exert influence on the location of such sites (Brandt et al. 1992), and deductive models, which attempt to construct hypotheses about site location from our understanding of past human behaviour, by means of variables relevant to the assumptions made (Van Leusen et al. 2005; Warren and Asch 2000; Kvamme 1990). However, before discussing the methodological approach underlying the construction of a model, it is perhaps more appropriate to consider how its aims are achieved; as Van Leusen puts it (Van Leusen et al. 2005), we can think of predictive models as explanatory models or correlative models. In the first case, the main goal of the model is to understand aspects of past settlement, so that the final model becomes a tool to assist in understanding past human behaviour. In the second case, correlative models seek to promote the preservation of archaeological features by estimating, as accurately as possible, the probability of the presence of archaeological remains within the study area. This is what happens in CRM plans, where the predictive model becomes a tool for managing cultural resources, making it possible to identify which areas are most in need of preservation (Conolly and Lake 2006, 179-86).
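To make the inductive, correlative approach concrete, a minimal sketch in Python follows. It is not drawn from any of the works cited: the environmental variables (slope, elevation, distance to water), the synthetic data and the choice of logistic regression are all illustrative assumptions, standing in for whatever variables and classifier a real study would use.

    # Minimal sketch of an inductive (correlative) model: site presence is
    # regressed against environmental variables measured at known locations.
    # Variable names and data are illustrative, not from any cited study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical environmental variables sampled at surveyed locations
    slope      = rng.uniform(0, 30, n)       # degrees
    elevation  = rng.uniform(50, 500, n)     # metres a.s.l.
    dist_water = rng.uniform(0, 5000, n)     # metres to nearest watercourse

    # Synthetic "observed" presence: more likely on gentle slopes near water
    logit = 1.5 - 0.1 * slope - 0.001 * dist_water
    present = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([slope, elevation, dist_water])
    model = LogisticRegression(max_iter=1000).fit(X, present)

    print(model.predict_proba(X[:5])[:, 1])  # probability of site presence

Once fitted, such a model would typically be applied to every cell of the study area grid, producing a continuous site-potential surface that is then reclassified into low, medium and high potential zones.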

We believe that the methodology applied in the construction of a predictive map is in fact guided by the purpose for which the model is built (research, cultural resource management, etc.). In the case of research-orientated models, the strength of the map lies in its accuracy, namely the ability of the model to detect as many archaeological sites as possible within high-potential areas (Verhagen 2007). Preservation-orientated models, on the other hand, tend to favour precision over accuracy, reducing as far as possible the extent of high-potential areas (thereby enhancing their site density, i.e. the ratio of number of sites to surface extension), in order to allow better and more careful control of the study area (Verhagen 2007).
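As a numerical illustration of the two measures, the short calculation below uses invented figures, and adopts one simple (assumed) way of quantifying precision as the share of the study area excluded from the high-potential zone:

    # Invented figures illustrating the two performance measures: a model
    # flags 20 of 100 sq km as high potential, capturing 40 of 50 known sites.
    total_area   = 100.0   # km2, whole study area
    flagged_area = 20.0    # km2 classified as high potential
    total_sites  = 50      # known sites in the study area
    hit_sites    = 40      # known sites inside the flagged zone

    accuracy  = hit_sites / total_sites         # 0.80: share of sites captured
    precision = 1 - flagged_area / total_area   # 0.80: share of area excluded

    print(f"accuracy  = {accuracy:.2f}")   # research-orientated models maximise this
    print(f"precision = {precision:.2f}")  # preservation-orientated models maximise this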

It is possible to identify two different trends in predictive modelling, an American and a European one, according to a distinction that tends to categorise the first as 'pragmatic' and the second as 'idealistic'. North America tends to use GIS and predictive models as tools for the management and preservation of cultural resources, regarding these as the best and cheapest option. In Europe, and particularly in Britain, the tendency is instead to use predictive modelling to understand and interpret past settlement, rather than primarily to predict the location of archaeological sites (Van Leusen 2002). As Lock and Harris (2006) stress, the landscape considered in CRM models is a 'landscape as now', based on a perception of space in which archaeological sites are set within the administrative limits where the work of preserving cultural heritage takes place. The opposite conception, a 'landscape as then', is one in which (as in Great Britain) attention focuses on explaining and interpreting the ancient landscape.

In general, authors agree that a final test (which can confirm or refute the initial assumptions) needs to be applied in order to guarantee the quality of a predictive model. As Warren (1990) notes, prediction is a mechanism for testing explanations, and, in order to verify the initial hypothesis, it is important that it be put to the test with new and independent data.

The problem of testing, however, is complex, because there is no agreement on the manner in which it should be applied. A lively debate is still ongoing in the Netherlands, where during the late 1980s much discussion took place about the different methods of testing to be applied to predictive models. Frequently asked questions have included whether a measure of model quality can be identified, what datasets would allow a rigorous application of testing, and which kind of model is best suited to being reliably checked (Verhagen 2007). Some authors hold that the quality of a model rests on three fundamental aspects: specificity, falsifiability and expert consensus. A model can be considered specific when its levels of precision and accuracy are very high (according to the so-called Kvamme gain, given below). It is falsifiable if it follows a defined set of rules (a protocol) leading to the production of data that can be verified, in order to expose any problems in its building process. Finally, the model must command the consensus of experts, those in a position to judge whether the prediction correlates with the expected settlement distribution in the ancient landscape (Van Leusen et al. 2005).
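Kvamme's gain statistic is commonly formulated as

    G = 1 - (p_a / p_s)

where p_a is the proportion of the total study area classified as high potential and p_s is the proportion of known sites falling within that zone. For example, a zone covering 20% of the area that contains 80% of the sites gives G = 1 - (0.20 / 0.80) = 0.75; values approaching 1 indicate a model that is both accurate and precise, while values at or below 0 indicate no predictive power.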

According to the summary made by Verhagen, it is therefore possible to identify at least three distinct phases in the quality assessment of a model: first, a phase measuring the level of performance, namely the real capacity of the model to work in terms of accuracy and precision; second, a phase of validation, in which the model is compared with a test sample dataset, not necessarily a new one; and finally a model testing phase, in which the model is compared with a test dataset collected independently of the original sample (Verhagen 2007). However, no test can ensure a good quality predictive model if it is not based upon a good dataset, and in many cases datasets tend to be static, incomplete, imprecise and inaccurate (Church et al. 2000). This is often due to the incorrect selection of variables, chosen very often on the basis of their availability in databases rather than on a careful consideration of the desired outcome (Ebert 2000).
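One possible reading of how the three phases differ in practice is sketched below, in Python with entirely synthetic data: performance is measured on the data used to build the model, validation on a held-back split of the same sample, and testing on an independently collected one. The datasets and model are invented for illustration and do not reproduce Verhagen's procedure.

    # Sketch of the three assessment phases on synthetic data:
    # (1) performance on the data used to build the model, (2) validation on
    # a held-back split of the same sample, (3) testing on an independently
    # collected sample. All figures are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    def make_sample(n):
        X = rng.normal(size=(n, 3))                     # environmental variables
        y = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))  # site present / absent
        return X, y

    X, y = make_sample(300)                 # original (survey) sample
    X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=1)
    X_ind, y_ind = make_sample(100)         # stands in for an independent survey

    model = LogisticRegression().fit(X_fit, y_fit)

    print("performance:", model.score(X_fit, y_fit))  # phase 1: building data
    print("validation: ", model.score(X_val, y_val))  # phase 2: held-back split
    print("testing:    ", model.score(X_ind, y_ind))  # phase 3: independent data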

In this regard, a criticism often made of inductive models is that they base their predictive power on environmental variables which express the present-day geographic setting and are therefore unlikely to give useful indications about the palaeo-environment (an aspect that matters all the more when the ultimate goal of the work is a model that seeks to explain past human behaviour). The site-centred vision of spatial analysis has also been strongly criticised: sites are considered and analysed as separate units, measured in terms of their spatial relationships with other sites. This fails to take into account a host of human activities, areas of land use, and issues such as mobility, which are inevitably part of the settlement system but can hardly be captured by a correlative analysis that considers only the physical proximity between one site and another (Ebert 2000).

In the last few years there has been increasing interest in socio-cultural variables, owing to the fact that environmental variables are insufficient in themselves to explain past human behaviour and socio-economic phenomena, and risk giving a 'deterministic' interpretation of ancient human settlement (Wheatley 2003; Gaffney and Van Leusen 1995).

At present, however, socio-cultural variables are difficult to model, and their use has been limited to a few case studies, mainly related to the reconstruction of prehistoric ritual landscapes (Verhagen 2007). This should not discourage their use in predictive modelling since, as Kohler (1988) stresses, archaeologists always treat social factors (political, cognitive and religious) as determinants of site location and of the interpretation of site function. In this regard, the identification of the most significant factors affecting site location could foster the implementation of socio-cultural variables in predictive models. Indeed, such variables could be analysed for their 'predictive significance' by comparing and evaluating, in terms of performance, the resulting models against models built solely on environmental variables (Verhagen 2007). A special mention, finally, goes to the use of Bayesian statistical techniques, which allow expert judgement to be incorporated into the final model by translating knowledge about the areas best suited to site location into quantitative data (Verhagen 2006).
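A minimal sketch of the Bayesian idea under simple assumptions follows (a beta-binomial model with invented figures; Verhagen 2006 describes the techniques actually used): the expert's judgement enters as a prior distribution, which survey data then update.

    # Beta-binomial updating: expert judgement enters as a prior, survey data
    # as the likelihood. All figures are invented for illustration.
    from scipy.stats import beta

    # The expert believes roughly 30% of survey units in this zone hold sites,
    # with confidence equivalent to having already seen 20 units.
    prior_a, prior_b = 6, 14            # prior mean 6/20 = 0.30

    # New fieldwork: 25 units surveyed, 12 with archaeological remains.
    found, empty = 12, 13

    post_a, post_b = prior_a + found, prior_b + empty
    posterior = beta(post_a, post_b)

    print(f"posterior mean: {posterior.mean():.2f}")   # ~0.40
    print("95% interval:", posterior.interval(0.95))

The strength of the prior (here equivalent to 20 imaginary survey units) controls how much weight expert opinion carries against new fieldwork; a more confident expert would be represented by a more concentrated prior.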

In short, there is no single way to build a predictive model: much depends on the aims for which it is created, and the final result will inevitably be affected by the working method, which in turn is influenced by the dataset used. Moreover, it is crucial that a predictive model, be it correlative or explanatory, rests upon a solid database that can ensure a reliable basis for predicting the past (Zubrow 1990).

