
4. Uses of models

In looking at the uses of models in archaeology, we have to ask both why we use them and how we use them, distinguishing uses that are 'tactical' in nature (means to an end) from those that are more 'strategic' (ends in themselves).

4.1 Why use models?

Starting at the tactical end of the spectrum, we could see the creation of a simple statistical model as the first step towards the estimation of a parameter of, or the testing of a hypothesis about, a particular archaeological situation. For example, a test that the value of a certain variable has increased over time may rest on a (possibly implicit) assumption that the variable has a normal distribution at any given point in time. This use of a model is so simple that it often goes unrecognised as such, and it can be avoided by the use of nonparametric techniques if there is no clear choice of an appropriate model.
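As a minimal illustration of both routes (using simulated measurements rather than data from any real assemblage), the parametric test assumes normality within each phase, while the nonparametric alternative does not:

```python
# A minimal sketch (simulated data): testing whether a variable has increased
# between an earlier and a later phase. The t-test assumes the variable is
# normally distributed within each phase; the Mann-Whitney U test is the
# nonparametric alternative when that model seems unsafe.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
earlier = rng.normal(loc=10.0, scale=2.0, size=30)   # simulated earlier-phase values
later   = rng.normal(loc=11.5, scale=2.0, size=30)   # simulated later-phase values

t_stat, t_p = stats.ttest_ind(later, earlier, alternative='greater')
u_stat, u_p = stats.mannwhitneyu(later, earlier, alternative='greater')

print(f"t-test (assumes normality):             p = {t_p:.4f}")
print(f"Mann-Whitney (no normality assumption): p = {u_p:.4f}")
```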

A valuable, and more self-conscious, tactical use of models is the assessment of the relative merits of different statistical techniques or measures. For example, the long and continuing debate about the respective merits of different ways of quantifying fragmentary material (see Orton 1993 for an overview and Shott 2001 for a contemporary assessment) requires the development of stochastic models of site formation and taphonomic processes as a framework within which to compare the competing measures. Otherwise we remain in the realm of personal feelings, and progress is hard to make.
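The kind of comparison meant here can be sketched, very crudely, in a few lines of code: an assemblage is simulated with known 'true' proportions of two wares, passed through a simple breakage-and-recovery model, and the competing measures are judged by how well they recover those proportions. All the parameters below are invented for illustration and make no claim to realism:

```python
# A crude sketch of using a stochastic site-formation model to compare
# quantification measures. Two wares are deposited in equal numbers of
# vessels; they break into different numbers of sherds and are partially
# recovered. We compare how sherd count and sherd weight estimate the
# original proportion of ware A. All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
true_prop_A = 0.5
n_vessels = 1000

def simulate_assemblage(mean_sherds, weight_per_vessel, recovery_rate, n):
    sherds_per_vessel = rng.poisson(mean_sherds, size=n) + 1
    recovered = rng.binomial(sherds_per_vessel, recovery_rate)
    count = recovered.sum()
    weight = (recovered / sherds_per_vessel * weight_per_vessel).sum()
    return count, weight

# Ware A: robust, breaks into few sherds; ware B: fragile, breaks into many.
count_A, weight_A = simulate_assemblage(5, 800.0, 0.3, int(n_vessels * true_prop_A))
count_B, weight_B = simulate_assemblage(20, 800.0, 0.3, int(n_vessels * (1 - true_prop_A)))

print(f"true proportion of ware A:   {true_prop_A:.2f}")
print(f"proportion by sherd count:   {count_A / (count_A + count_B):.2f}")
print(f"proportion by sherd weight:  {weight_A / (weight_A + weight_B):.2f}")
```

Even this toy model makes the point of the debate: when breakage rates differ between wares, sherd counts mislead while weight (here, with equal vessel weights) does not, and only an explicit model lets us say so rather than merely feel it.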

The construction of models may also help us to visualise complex datasets. Visualising high-dimensional data is a perennial problem in many disciplines; modelling data as sets of points in multi-dimensional spaces leads to a trade-off between the ability to 'see' the data in one, two or three dimensions, and the information lost through the inevitable simplification of the data's structure. For example, the technique of correspondence analysis (Greenacre 1984) allows us to visualise the data from contingency tables as scatter plots, which simultaneously represent both the rows and the columns of the tables. Behind this widely-used technique lies a model of archaeological objects as points in a space in which distance is defined in an unfamiliar (chi-squared) way.
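For readers who wish to see the model behind the scatter plot, correspondence analysis can be written compactly as a singular value decomposition of the table of standardised residuals; the contingency table of counts below (assemblages by fabric) is purely illustrative:

```python
# A compact sketch of correspondence analysis: rows and columns of a
# contingency table are modelled as points whose chi-squared distances are
# reproduced, as far as possible, in two dimensions. The table of counts
# (assemblages x pottery fabrics) is invented for illustration.
import numpy as np

N = np.array([[30, 10,  5],
              [20, 25, 10],
              [ 5, 15, 40],
              [10, 20, 30]], dtype=float)

P = N / N.sum()                                       # correspondence matrix
r = P.sum(axis=1)                                     # row masses
c = P.sum(axis=0)                                     # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardised residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]           # principal row coordinates
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]        # principal column coordinates

print("row coordinates (first two axes):\n", np.round(row_coords[:, :2], 3))
print("column coordinates (first two axes):\n", np.round(col_coords[:, :2], 3))
```

Plotting the first two columns of each set of coordinates on the same axes gives the familiar joint display of assemblages and fabrics.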

A class of models that has flourished in recent years is that of predictive models, which are common in GIS and spatial statistics generally (see Wheatley, this volume). A statistical model of the relationship between the locations of known archaeological sites and environmental and topographical variables in intensively surveyed areas can be used to predict the likelihood of the discovery of further sites in different zones of less heavily surveyed areas. Different forms of model (e.g. linear, non-linear) and different choices of variables are open to debate.
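One common way of building such a model is logistic regression of site presence/absence on the environmental variables; the sketch below uses simulated data and two invented variables (elevation and distance to water) simply to show the shape of the calculation, not to endorse any particular form of model:

```python
# A minimal sketch of a predictive model of site location: logistic regression
# of site presence/absence in surveyed cells on two invented environmental
# variables, then prediction of site likelihood for cells in a less heavily
# surveyed zone. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
elevation = rng.uniform(0, 300, n)          # metres above valley floor
dist_water = rng.uniform(0, 5000, n)        # metres to nearest watercourse

# Simulated 'truth': sites favour low ground near water.
logit = 1.5 - 0.01 * elevation - 0.001 * dist_water
site_present = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([elevation, dist_water])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)          # standardise for stable fitting
model = LogisticRegression().fit(X_std, site_present)

# Predicted probability of a site in two hypothetical unsurveyed cells.
new_cells = np.array([[50.0, 200.0], [250.0, 4000.0]])
new_std = (new_cells - X.mean(axis=0)) / X.std(axis=0)
print(model.predict_proba(new_std)[:, 1])
```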

The benefits that can be obtained by using stochastic models in problems of archaeological interpretation are less well known and deserve wider recognition. It may happen that an unknown variable of interest (e.g. the number of firing seasons of a pottery production site) can be expressed in terms of other factors (e.g. the number of waster pots excavated, the waster rate, the capacity of the kilns, etc.), each of which can be given an informal confidence interval (Orton 2002). Simply substituting the upper and lower limits of each factor into the equation is likely to lead to an unacceptably wide interval for the unknown variable (in this example, 1 to 150 seasons for a site known from other evidence to have operated over a period of about 50 years). Replacing the deterministic model with a stochastic one enables us to estimate the variance of the unknown variable, and so derive a much narrower confidence interval (in this example, from about 4 to 40 seasons, i.e. about 25% of the original range). Although still vague, this outcome is precise enough to generate interesting archaeological conclusions. The reason for this gain is that the 'extreme' outcomes (e.g. <4 or >40 seasons) arise only when several factors take extreme values simultaneously; each such value has a low probability, so their joint probability is very low, putting the outcome outside any reasonable confidence interval. There is a general assumption here that the factors are themselves uncorrelated; in this particular case, either this is archaeologically reasonable, or any hypothetical correlations are such as to decrease rather than increase the variance.
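The gain described here can be illustrated with a Monte Carlo sketch. The factor names follow the example, but the ranges and distributions are invented stand-ins, not the figures used in Orton (2002): each factor is given a distribution spanning its informal confidence interval, and the distribution of the derived quantity is simulated rather than bounded by worst cases.

```python
# A hedged Monte Carlo sketch of the 'firing seasons' argument. Each uncertain
# factor is drawn from a distribution spanning its informal limits, and the
# number of seasons is derived from the draws. All ranges are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def between(low, high, size):
    """Draw values whose central 95% range roughly spans (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / 4
    return rng.lognormal(mu, sigma, size)

wasters_found      = between(1000, 3000, n)   # waster vessels represented in the assemblage
waster_rate        = between(0.02, 0.10, n)   # proportion of production that failed
kiln_capacity      = between(100, 400, n)     # vessels per firing
firings_per_season = between(3, 10, n)        # firings per season

seasons = wasters_found / (waster_rate * kiln_capacity * firings_per_season)

# Substituting the extremes of every factor at once gives a very wide interval...
low_case  = 1000 / 0.10 / (400 * 10)
high_case = 3000 / 0.02 / (100 * 3)
# ...whereas the simulation shows such combinations of extremes are very unlikely.
lo, hi = np.percentile(seasons, [2.5, 97.5])
print(f"interval from substituting extremes: {low_case:.0f} to {high_case:.0f} seasons")
print(f"simulated 95% interval:              {lo:.0f} to {hi:.0f} seasons")
```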

An unexpected by-product of this work was that it could be used to highlight areas where further work would be most likely to yield the largest gains in precision. In this example, it showed that it would be useful to try to refine the estimates of the length of the firing season and the frequency of firing, but that little would be gained from refining the estimate of the capacity of the kilns.
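Continuing the sketch above (with the same invented ranges, so the ranking it produces should not be read as the one reported for the real case), a crude sensitivity check holds each factor in turn at a central value and records how much the interval shrinks; the factor whose fixing shrinks it most is where refinement would pay best.

```python
# A crude sensitivity check on the Monte Carlo sketch: fix each factor in turn
# at its central (geometric mean) value and see how much the 95% interval for
# the number of seasons narrows. Ranges are the same invented ones as above.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
factors = {                               # (lower, upper) informal limits, all invented
    "wasters_found":      (1000, 3000),
    "waster_rate":        (0.02, 0.10),
    "kiln_capacity":      (100, 400),
    "firings_per_season": (3, 10),
}

def between(low, high, size):
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / 4
    return rng.lognormal(mu, sigma, size)

def interval_width(fixed=None):
    draws = {name: (np.full(n, np.sqrt(low * high)) if name == fixed
                    else between(low, high, n))
             for name, (low, high) in factors.items()}
    seasons = draws["wasters_found"] / (
        draws["waster_rate"] * draws["kiln_capacity"] * draws["firings_per_season"])
    lo, hi = np.percentile(seasons, [2.5, 97.5])
    return hi - lo

baseline = interval_width()
for name in factors:
    print(f"fixing {name:<18}: width {baseline:.0f} -> {interval_width(fixed=name):.0f} seasons")
```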

So far, all these uses have been of a 'tactical' nature: obtaining results of limited scope, which can contribute to a bigger picture. This reflects my belief that this is where the main value of mathematical models in archaeology lies, but the uses of 'big picture' models should not be overlooked. The first of them is probably the clarification of our own ideas about an archaeological problem; the effort needed to express our views as (for example) a series of equations will at least focus the mind, and may point out gaps or weaknesses in our argument. It is also likely to increase our understanding of the problem, since early versions of our model may well generate unrealistic outcomes, and so point to the need for further refinement. Communication of our ideas may be considerably enhanced by the use of a model, particularly a graphical one (or an algebraic one, depending on the intended audience), and there is no denying the personal satisfaction that may be derived from the creation of a mathematical model.

Aldenderfer (1981, 20) lists three sorts of usefulness (utility) that such a model might have:

However, it could be argued that the use of models as ends in themselves suffers from the same defects as are often noted in the use of formal hypothesis-testing procedures (Cowgill 1977). There is, for example, the problem of equifinality — the possibility that quite different models may lead to indistinguishable outcomes (Hodder and Orton 1976, 95) — so that a fit to the data does not necessarily support the hypothesis or model. There is also the problem that the acceptance or rejection of a hypothesis is often simply a reflection of the quantity of data available to test it — the more data we have, the more likely we are to reject the hypothesis (other things being equal). This suggests that, in the long run, all 'strategic' models are likely to be found wanting, and that, paradoxically, their main purpose may actually be to fail. It has been said that 'The purpose of models is not to fit the data but to sharpen the questions' (Karlin 1983). We shall return to this point in more detail in the next section, when we look at the 'how' of using models.
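Before moving on, the point about sample size can be made concrete with a small numerical illustration (the figures are invented): the same modest difference in assemblage composition gives a comfortable significance level with a small sample and a decisive rejection with a large one.

```python
# A small illustration (invented figures) of how acceptance or rejection can
# simply reflect sample size: two assemblages with the same modest difference
# in composition (52% vs 48% of one ware) give very different chi-squared
# p-values as the number of sherds grows.
import numpy as np
from scipy.stats import chi2_contingency

for n in (100, 1000, 10_000):
    table = np.array([[0.52 * n, 0.48 * n],
                      [0.48 * n, 0.52 * n]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"n = {n:>6} sherds per assemblage: p = {p:.4f}")
```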


