The matter of workflow has become an important issue in archaeological field work. Technology increasingly acts to structure the framework and activities that take place during field recording (De Reu et al. 2014; Berggren et al. 2015), but as we have seen above, the challenges of spatial data management must also be addressed. Since 2009, the '3D-Digging at Çatalhöyük' project has explored numerous techniques for 3D recording of the excavations at this important Neolithic site in Turkey, and it is one of the best documented examples of how complex spatial data may be integrated both in terms of recording and managing 3D structures (Forte 2014). Other projects have demonstrated how 3D models may be subjected to visual analysis within GIS environments (Katsianis et al. 2006; 2008; Landeschi et al. 2016). It is understandable why these developments are taking place within GIS frameworks, most likely owing to the ongoing convergence of 3D functionality across software platforms. The dividing lines between GIS and CAD are becoming increasingly diffuse, as each continuously adds functionality from the other. AutoDesk has, for example, issued numerous 3D- and GIS-enabled variants of its CAD suite, including AutoCAD Map 3D, which, as the name implies, seeks to fulfil the needs of 3D map-making. ESRI's flagship ArcGIS has grown to include the ArcScene application, and the latest version of ArcGIS Pro supports the integration of 3D point clouds, meshes and 3D polygons together with its native data management capabilities. Even dedicated 3D modelling software such as AutoDesk 3D Studio Max 2015 now supports point clouds as a native geometry type, emphasising the link to real-world 3D recorded objects. Central to this development are also several freeware or open source projects like Meshlab, which provide the interoperability and processing capabilities needed to use 3D data efficiently.
From a data management perspective, the development of the ADS 3D Viewer will provide visualisation capabilities and integration with archived archaeological 3D data (Galeazzi 2014).
The application of 3D GIS at Çatalhöyük currently relies on ArcScene for the management of different 3D documentation objects, units or layers to visualise stratigraphy, combined with 3D vectorisation of the interpreted contexts and features. These polygons can then be extruded to model the 3D interpretation as solid objects, and more clearly visualise building elements (Berggren et al. 2015; Forte 2014; Forte et al. 2015). The shortcomings of relying on GIS as the technical platform are, however, again manifested in the way ArcScene handles vectors. Like ordinary GIS, ArcScene is based on a principle of projecting and interpolating vectors, but in this case not onto a geographical geodesic model like a sphere or cylinder, but onto a mesh surface. Depending on the resolution of the surface model and the number of vertices in the vector, it will only be able to conform to the surface to a certain degree, leading to lines and polygons that either float above or intersect with the mesh (Dell'Unto 2014; Kimball 2016). This raises a question of the value of continued use of vector geometries as conveyors of archaeological classification, when in reality they are limited by the quality of the underlying mesh and the interpolation and projection algorithms used by the software.
If we are to fully exploit the rich detail of 3D documentation we must adapt to the premise of new datatypes such as point clouds and meshes. In effect we need the ability to directly classify these types of data according to our interpretation (Wulff and Koch 2012). This allows us to work with data in a completely different manner. It is straightforward to perform a simple classification of, for instance, a point cloud or a mesh from a vector drawing, by projecting the two data types onto a common plane and executing a point-in-polygon algorithm (see Figure 11).
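As a minimal sketch of this idea, the following Python fragment projects 3D points onto the horizontal plane and tests each against a 2D polygon drawn in plan view using a standard ray-casting point-in-polygon test. The context label, polygon outline and sample coordinates are purely illustrative assumptions, not data from any of the projects discussed.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside polygon (a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def classify_cloud(points, polygon, label):
    """Attach `label` to every 3D point whose plan (x, y) projection
    falls inside the polygon; other points are left unclassified (None)."""
    return [(x, y, z, label if point_in_polygon(x, y, polygon) else None)
            for (x, y, z) in points]

# Hypothetical example: a square context outline and three recorded points.
outline = [(0, 0), (4, 0), (4, 4), (0, 4)]
cloud = [(1.0, 1.0, 10.2), (2.5, 3.5, 10.1), (6.0, 1.0, 10.3)]
classified = classify_cloud(cloud, outline, "context_101")
```

In practice one would of course use an indexed spatial library rather than a linear scan, but the principle is the same: the interpretation captured in the vector drawing is transferred directly onto the individual points of the cloud.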
An important issue is, however, the lack of systems that meet archaeological criteria regarding visual representation of the vertical and horizontal, integrated with data management and strong semantic data models. Such a system would need to combine an incredible range of scales, from representations of the smallest items, such as pollen samples and individual pieces of charcoal, through artefacts, features and buildings, to intra-site spatial distributions. GIS, for example, often lacks the full data management capabilities needed when we want to represent a complex hierarchical archaeological data structure. Data management support is virtually absent in other types of spatial management systems, such as dedicated 3D-modelling software and to some extent CAD, and this very clearly accounts for the need to apply and even develop specialised management systems, as we have seen for example with IADB. Understandably, there is much focus on these technical issues related to new methods of spatial recording, and it is easy to blame technology and the lack of tools that fit the archaeological workflow, but in fact this is only half the problem. It is arguably equally an issue of a lack of conceptualisation of new types of data in the documentation process and, to a large extent, a matter of neglecting to map archaeological methodology onto new digital approaches. As drawing conventions are necessary to communicate our interpretations consistently in a 2D framework (Roskams 2001, 136), so are conventions or conceptualisations of 3D recording. We need a migration of an analogue frame of conceptualisations into a digital equivalent, and technical solutions that combine our textual and interpretational data with spatial data, taking into account the dynamics and heterogeneity of new types of spatial data: GPS measurements, photogrammetric techniques, vectors, rasters, 3D point clouds and meshes. Tools such as 'X-bones' (Isaksen et al.
2008), developed relatively early on, illustrate how it is possible to transform spatial data into a visual representation and embed semantic information, in this case for the analysis of human bones. Excavation projects in northern Greece have also demonstrated how it is possible to extend existing GIS systems and use ArcScene as a framework to include all aspects of spatial, conceptual and semantic information, such as excavation units, contexts, rasters and vectors, and even advanced 3D symbology (Katsianis et al. 2008). Looking beyond archaeology towards other disciplines such as chemistry or medicine, we find that 3D visualisation of semantic information is something that has been worked on for years, and which with modifications could be applied to archaeology (Hanwell et al. 2012).
To fully exploit the semantic, analytical and data management capabilities of 3D documentation within archaeology, we must be open to other types of data representations as well, which may not necessarily fit into our usual concepts of observation and interpretation, but act as a hybrid. One such hybrid, which is in fact also a hybrid of raster and vector representations, is the voxel model – or volume pixels. It has previously been extensively used in medicine as the framework for visualising MRI scans, but has also found its way into archaeology, especially through the extensions available for the open source GRASS GIS (Orengo 2013; Lieberwirth 2008a; 2008b; Bezzi et al. 2006). It is potentially less abstract and conceptually much closer to the physical object, which is probably why voxels have also been widely used for data from ground-penetrating radar (GPR) – in effect blurring the lines between above- and below-ground archaeology (Leckebusch 2003).
In 3D printing, physical voxels can be different shapes and sizes (see Beale and Reilly this issue). When working with volume pixels in the context of 3D visualisation (Figure 12), we must also choose a level of generalisation - the size of each little cube - depending on the size and amount of detail in our documentation. It is then a matter of projecting and applying a 3D grid to our 3D point clouds, effectively merging and splitting everything into neighbouring cubes. Voxels have several advantages over a point cloud or a complex 3D mesh. First of all, depending on the size of the grid, they reduce the amount of data significantly. Instead of recording x, y and z coordinates, the relative position in the grid and an ID are sufficient. Additionally, voxels can be stored as a sequence or stack of raster images or slices, and computers are very good at handling, and even compressing, large amounts of image data quickly and efficiently. The voxels can even inherit the classifications and semantic information from our vectorisations, allowing us to do spatial analyses and easily perform arbitrary cross-sections, as we are no longer dealing with simple surfaces, as in a mesh, but actually have some depth and volume to work with.
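The grid-snapping step described above can be sketched in a few lines of Python. Each point is mapped to the integer index of the cube it falls in, so an occupied voxel is stored once as an (i, j, k) triple rather than as many explicit coordinates; the cell size, grid origin and synthetic point cloud below are illustrative assumptions only.

```python
from math import floor

def voxelise(points, cell, origin=(0.0, 0.0, 0.0)):
    """Snap each (x, y, z) point into a regular 3D grid of cubes of side
    `cell`, returning the set of occupied voxel indices (i, j, k)."""
    ox, oy, oz = origin
    voxels = set()
    for (x, y, z) in points:
        i = floor((x - ox) / cell)
        j = floor((y - oy) / cell)
        k = floor((z - oz) / cell)
        voxels.add((i, j, k))
    return voxels

# A synthetic, densely sampled line of 1000 points collapses into far
# fewer 5 cm voxels, illustrating the data reduction.
cloud = [(x * 0.01, 0.0, 0.0) for x in range(1000)]
occupied = voxelise(cloud, cell=0.05)
print(len(cloud), "points ->", len(occupied), "voxels")  # 1000 points -> 200 voxels
```

Grouping the resulting indices by their k component would then yield exactly the stack of horizontal raster slices mentioned above, one image per level of the grid.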
In terms of documentation ideals, only time will tell to what extent new spatial data representations such as voxels will affect archaeological methodology. It could very well be a game-changer, as it is conceptually entirely different from our usual approaches. Instead of identifying and working with borders, surfaces and interfaces, we would actually be working with the volumes of 'stuff', allowing interpolation of layers and contexts between sections, and in effect changing the paradigm of documentation ideals. At the same time it could help to break down the separation between sensory data, such as geophysical surveys, and archaeological observations and interpretations. This, however, still requires work on technologies and excavation methods that provide effective means of acquiring the necessary spatial data.
Internet Archaeology is an open access journal. Except where otherwise noted, content from this work may be used under the terms of the Creative Commons Attribution 3.0 (CC BY) Unported licence, which permits unrestricted use, distribution, and reproduction in any medium, provided that attribution to the author(s), the title of the work, the Internet Archaeology journal and the relevant URL/DOI are given.