
3.3 Post-acquisition data processing

All objects were initially processed in Innovmetric's Polyworks suite. Following scan completion, the individual scans and/or rotations for an object were aligned to one another using N point pairs in a total least squares model to estimate the parameters of a similarity transformation (scale was provided by the scanner and not estimated). Following this pair-wise alignment, a single scan was selected to define the coordinate system (or locked, in the language of Polyworks) and a global best-fit alignment based on the Iterative Closest Point (ICP) algorithm was performed to fine-tune the alignment (Chetverikov et al. 2005). Because of the significant overlap between scans, a reduce overlap function was then applied to remove extensive areas of overlap and prepare the model for meshing. Areas of overlap are removed by an automatic process that chooses the best data based on scan location and direction: points whose surface normals point more directly back towards the digitiser are considered to be of higher quality, and data captured at more oblique angles are removed. Additional manual overlap reduction is then performed to remove any remaining overlap areas that were not effectively removed by the automatic process.
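The pair-wise alignment step described above can be sketched as a least-squares fit of a rotation and translation to corresponding point pairs (with scale fixed, since the scanner supplies it). The following is a minimal illustration using the standard SVD-based (Kabsch) solution; it is not the Polyworks implementation, and the function name is ours.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding point pairs.
    Scale is held at 1, as the text notes it is provided by the scanner.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # 3x3 cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

A full ICP refinement would iterate this fit, re-establishing closest-point correspondences between the scans at each step until the alignment converges.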

Following overlap reduction, a meshed polygonal model of the object is created. Depending on the complexity of the object, meshing can be challenging, since the connectivity between points must be inferred. Although the objects in this collection did not prove difficult to mesh, careful and iterative parameter selection (such as the maximum allowable edge length and maximum obliquity of a facet) is often required to find a suitable representative mesh. Additional automated and manual editing is required at this stage to eliminate holes, reduce scanner noise, and remove other data defects. Once the meshed polygonal model has been created and all editing is finalised, the texture data are converted from vertex colour data to a separate texture image file. The mesh data are then output in OBJ format with an accompanying material (.mtl) and texture file (.jpg). OBJ is a relatively simple, well-recognised file format for 3-D objects that stores both 3-D geometry and texture. The format is open and adopted by a very wide range of 3-D applications. Following the creation of the high-resolution OBJ, a number of additional low-resolution versions of the object are created, as described below.
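To make the OBJ/MTL/JPEG output arrangement concrete, the sketch below writes a minimal textured OBJ and its companion material file. The file layout follows the standard OBJ and MTL conventions; the function and material names are illustrative, not taken from the workflow described above.

```python
import os

def write_textured_obj(stem, vertices, uvs, faces, texture="texture.jpg"):
    """Write a minimal OBJ + MTL pair referencing a JPEG texture.

    vertices: list of (x, y, z) positions
    uvs:      list of (u, v) texture coordinates
    faces:    list of faces, each a sequence of 1-based (vi, ti) index pairs
    """
    # Material file: a single material with a diffuse texture map
    with open(stem + ".mtl", "w") as m:
        m.write("newmtl scanned\n")
        m.write("map_Kd %s\n" % texture)
    # Geometry file: vertices, texture coordinates, then textured faces
    with open(stem + ".obj", "w") as o:
        o.write("mtllib %s.mtl\n" % os.path.basename(stem))
        for x, y, z in vertices:
            o.write("v %g %g %g\n" % (x, y, z))
        for u, v in uvs:
            o.write("vt %g %g\n" % (u, v))
        o.write("usemtl scanned\n")
        for face in faces:
            o.write("f " + " ".join("%d/%d" % (vi, ti) for vi, ti in face) + "\n")
```

Because both files are plain text, the low-resolution derivative models mentioned above can reuse exactly the same structure with decimated vertex and face lists.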



© Internet Archaeology/Author(s)
University of York legal statements | Terms and Conditions | File last updated: Tue Jun 28 2011