Shape from Light!
As part of our project, we are attempting the photogrammetric rendering of arms from our studied sites. Currently, we are working on the 3d reconstruction of spearheads and swords from Tragilos (Aidonochori) as well as weapons from the sanctuary of Oisyme.
Photogrammetry is a fascinating method of reconstructing a three-dimensional object from a set of photographs, taken from different angles. These photographs are fed into a specialized software suite, where points of reference are identified on the objects; then, through a process called ‘Structure from Motion’, these points are used to track the movement of the camera between photos and, through that, ‘assemble’ the shape of the photographed object.
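To make the 'Structure from Motion' idea a little more concrete, here is a toy, one-dimensional version of its first step: finding where the surface detail of one photo reappears in the next, and reading the camera movement off that offset. This is purely our own illustrative sketch; real photogrammetry software does this in full 3D with thousands of feature points.

```python
import numpy as np

def estimate_shift(img_a, img_b, max_shift=20):
    """Estimate the horizontal pixel shift between two overlapping images.

    This is the toy, single-axis version of what 'Structure from Motion'
    does: find where the surface detail of one photo reappears in the
    next, and infer the camera movement from that offset.
    """
    a = (img_a - img_a.mean()) / img_a.std()
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        b = np.roll(img_b, -s, axis=1)
        b = (b - b.mean()) / b.std()
        score = (a * b).mean()  # normalized cross-correlation
        if score > best_score:
            best, best_score = s, score
    return best

# A synthetic 'textured surface', photographed twice with a sideways move.
rng = np.random.default_rng(42)
surface = rng.random((64, 64))
photo_1 = surface
photo_2 = np.roll(surface, 7, axis=1)   # 'camera' moved 7 pixels sideways

print(estimate_shift(photo_1, photo_2))  # → 7
```

Note that the recovery only works because the synthetic surface is textured; on a featureless surface there is no correlation peak to find, which is exactly the failure mode discussed below.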
As we’ve mentioned before, we’ll be using 3DF Zephyr Lite for our project: it is a lightweight, fast software solution that can handle image sets of well over a hundred photographs (500, to be exact!). We readily recommend it to early-career researchers for its competitive price, intuitive workflow and the customer support provided by its maker, the Italy-based 3Dflow.
–Our photography setup. We are using 5600 Kelvin ‘daylight’ lamps for lighting and a white tent to provide diffuse ambient light.
To capture the photographs, we are using a light tent, so as to ensure diffuse white lighting. Our single camera is a trusty Nikon D7200 (a simply excellent DSLR) with a Tamron macro lens; we are not capturing at maximum resolution, as that would make handling the dataset prohibitively time-consuming while providing no benefit beyond around 10-12 megapixels. The camera is mounted on a tripod (as exposure times vary between 2 and 4 seconds) and the object to be reconstructed in 3d is placed on a turntable inside the light tent, where it is photographed from multiple angles.
The successful reconstruction of a 3d structure through photogrammetry requires the following three key elements:
- The scanned object must have a textured surface, providing the software with a variety of control points to ‘follow’. If the surface is too uniform, the software cannot distinguish between the various capture angles. And the ‘Structure from Motion’ reconstruction will fail horribly.
- The shape of the scanned object and the camera positioning strategy must allow for a considerable overlap of control points between photographs; otherwise, the software will fail to ‘understand’ how the camera moved between snapshots. And, again, the ‘Structure from Motion’ reconstruction will fail horribly.
- The lighting of the object must remain constant throughout the photography process. For the control points to be identified correctly, their relative color values must not change significantly. Otherwise, the software will fail to identify the same control point across multiple photographs. And, yes, of course, the ‘Structure from Motion’ reconstruction will fail horribly.
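That third requirement can be sanity-checked before a photo set ever reaches the reconstruction software. The sketch below is a hypothetical helper of our own (not part of any photogrammetry package) that flags a set whose overall brightness drifted mid-session:

```python
import numpy as np

def lighting_drift(photos):
    """Return the max deviation of each photo's mean brightness
    from the average brightness of the whole set.

    A large value suggests the lighting changed mid-session, which
    would make control points unrecognisable across photographs.
    """
    means = np.array([p.mean() for p in photos])
    return float(np.abs(means - means.mean()).max())

# Synthetic example: ten evenly lit 'photos', then one brighter frame.
rng = np.random.default_rng(1)
steady = [rng.random((32, 32)) * 0.5 + 0.25 for _ in range(10)]
drift = steady + [rng.random((32, 32)) * 0.5 + 0.45]  # lamp bumped!

print(lighting_drift(steady) < lighting_drift(drift))  # → True
```

A simple mean-brightness check like this would miss subtler problems (a moving shadow, a colour shift), but it captures the basic point: the set must be photometrically consistent.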
All of the above makes metal artefacts a nightmare to work with; blades and spearheads doubly so.
First of all, metal is reflective, which means that its effective surface color changes as you view it from different angles; specular highlights also shift significantly as the camera moves between photographs, drastically changing how the software perceives the object (so key element 3 presents us with a major problem). In addition, most of our objects are blades or sheet metal, i.e. simple geometric planes with a very thin cross-section. Photogrammetry can provide a very good relief reconstruction of the flat sides of a blade but, as soon as the camera turns to switch from one side to the other, all original reference points are lost, and the ‘back’ of the blade or its edge is far too thin to provide sufficient points for a smooth transition. Blades are, by their very nature, utter anathema to key element 2.
–A demonstration of the problem. Observe how the flat side of the blade becomes a thin line, resulting in the loss of all reference point information as we move from one side of the blade to the other.
To deal with the above problems, we have implemented several experimental solutions:
Thankfully, most of our objects have received the attention of conservators in recent years and are coated with non-metallic varnishes for their long-term preservation. This renders all specular reflections non-metallic and polarized. By using a light tent to achieve diffuse, ambient lighting and a polarizing filter on our camera lens, we can eliminate most (if not all) of these reflections. Our setup also allows us to keep the camera in a fixed spot and rotate the object on a turntable, instead of the other way around; this is crucial, as it allows us to use a tripod for those long, 2-4 second exposures. As the lighting is even, rotating the object does not affect the way its surface is lit. That’s our problems regarding key element 3 dealt with!
The matter of overlapping control points between different capture angles (or, as is the case, the lack of such overlap) is a more serious problem, and one that stymied us for a long time. We eventually reached the conclusion that three (!) reconstructions per object were necessary: one for each side of the blade, and one to provide a ‘base’ model, for the two ‘side’ surfaces to wrap around.
–Our reference base, with a spearhead in place for photography. No, those patterns are not a cryptic magic formula, although their effect might as well be!
The first two reconstructions are handled easily and intuitively, by placing the object on a white background and taking up to 50 photographs of each side from different angles. The third capture is much more complex. A chemically neutral soft foam ‘base’ was prepared (to avoid harming the artefacts), and its surface was marked with a series of random symbols and markings, giving it a texture that 3DF Zephyr could ‘lock’ onto. Artefacts were placed on this base vertically, and the entire assembly was rotated on the turntable. This effectively allowed each side of the blade to be registered against a third object (the textured base), giving us the alignment information needed to place the two ‘side’ scans correctly in relation to each other.
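The geometry behind this trick can be written down in a few lines: if each ‘side’ scan is registered to the textured base by a rigid transform, the transform relating the two sides falls out by simple composition, with no direct side-to-side feature overlap needed. Here is a hypothetical numpy sketch (the software computes this internally; the numbers below are made up for illustration):

```python
import numpy as np

def rigid(rot_deg, t):
    """Build a 4x4 homogeneous rigid transform: rotation about the
    vertical (z) axis by rot_deg degrees, followed by translation t."""
    c, s = np.cos(np.radians(rot_deg)), np.sin(np.radians(rot_deg))
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

# Hypothetical alignments recovered from the textured base:
# each side scan is registered into the base's coordinate frame.
side_a_to_base = rigid(10, [0.0, 0.1, 0.0])
side_b_to_base = rigid(190, [0.0, 0.1, 0.02])  # other side, flipped ~180°

# The transform placing side A relative to side B falls out by
# composition: go from A into base coordinates, then from base into B.
a_to_b = np.linalg.inv(side_b_to_base) @ side_a_to_base
print(np.round(a_to_b, 3))
```

The two side scans never need to share a single control point; the textured base acts as the common frame of reference, which is exactly the role it plays on our turntable.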
–The complete reconstruction of a machaira sword from Aidonochori.
Over the past week, we have captured more than one thousand (!) images, and we are still working diligently to compile a 3d reconstruction corpus. For now, we leave you with the above image record of our labors; rest assured, however, that we are exploring options for embedding 3d models on the website, which we will do in the near future, for your viewing pleasure.