Virtualization Process Overview

How does IVL staff virtualize an object?

We use a seven-step process that is relatively simple, even if the technology behind it is not. Throughout the process, we follow the procedures outlined in our Handling Protocols.

Step One — Compile Available Data

All of the available data for an object to be scanned is compiled into a database that we have constructed for the IVL Virtual Specimen Library. This information can come from collections data, individuals, publications, and other sources. No fewer than six photographs are also taken of every object we scan, both for archival purposes and to facilitate the creation of a texture map for the finished model (step six).
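The IVL database schema is not described here, so the table and column names below are purely illustrative; this sketch only shows how a minimal specimen record of this kind might be stored, using Python's built-in sqlite3 module.

```python
import sqlite3

# Hypothetical minimal schema; the real IVL database fields are not public,
# so these column names are assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE specimens (
    id INTEGER PRIMARY KEY,
    taxon TEXT,
    element TEXT,
    source TEXT,
    n_photos INTEGER)""")
con.execute(
    "INSERT INTO specimens (taxon, element, source, n_photos) VALUES (?, ?, ?, ?)",
    ("sea otter", "last lumbar vertebra", "collections data", 6))
row = con.execute("SELECT element, n_photos FROM specimens").fetchone()
print(row)  # ('last lumbar vertebra', 6)
```

Parameterized inserts like the one above keep mixed-source collections data (step one's publications, individuals, and collections records) consistent as the library grows.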


Last Lumbar Vertebra of a Sea Otter



Step Two — Object Assessment and Process Design


All of our models are produced through the same basic pipeline. Regardless of the subject, every object must be scanned and edited in several orientations in order to sample the entire surface area and create a solid model. However, because we employ multiple surface scanners, each with its own strengths and weaknesses, an object’s morphology, size, and complexity must first be assessed by the IVL staff before any scanning can begin. This allows us to tailor a process that will produce the best possible model in the shortest amount of time.



FARO Edge and Laser Line Probe Scanners, FARO Focus, Minolta VIVID 9i, NextEngine 3D scanner HD, Cyberware Desktop scanner.


Step Three — Edit, Clean and Consolidate Multiple Scans

Once the appropriate number of scans has been completed on any of our scanners, the resultant model consists of a rough series of merged polygon meshes. These must be manually edited to remove any artifacts or aberrations introduced by the scanning process, including whatever support structure was used to hold the object while it was being scanned. The individual scans are then consolidated into a single solid model, which is ready for the final stage of editing.
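The alignment stages are performed interactively in scanning software; as a sketch of the math underneath them, here is the Kabsch algorithm in Python/NumPy, which finds the rigid rotation and translation that best superimpose one scan's points onto corresponding points of another. The function name and toy point sets are hypothetical.

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch algorithm: find rotation R and translation t that best map
    corresponding 3-D points in `source` onto `target` (least squares)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: one "scan" rotated 90 degrees about z and shifted,
# then recovered by alignment.
scan = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = scan @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = rigid_align(scan, moved)
aligned = scan @ R.T + t
assert np.allclose(aligned, moved)
```

Production tools refine this with iterative closest-point matching, since real scans do not come with known point correspondences.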

Raw scan data.

Raw scan flat rendered.

Garbage in scan.

Scan with garbage removed.

Alignment Stage 1

Alignment Stage 2

Alignment Final


Modeling in Geomagic Studio.


Step Four — Final Edit

A final edit is conducted inside Geomagic Studio. This edit removes any intersecting polygons, fills any remaining holes, and clears the vertex color data that gets ‘baked’ into the model during scanning. Two important things must happen during this final edit to prepare the model for the last stages of its creation. The first is the creation of a manifold surface, which allows the model’s dimensions and topology to be expressed in terms of Euclidean space and thus be measurable in a consistent form. The second is that the now-manifold model must be saved at two different levels of polymesh density: one at full resolution, and one at roughly 1,000 polygons.
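Geomagic Studio performs the manifold check internally; to illustrate what "manifold" means for a closed triangle mesh, here is a minimal pure-Python check (the function name is hypothetical). A watertight surface requires every edge to be shared by exactly two triangles, which is why hole-filling is part of this step.

```python
from collections import Counter

def is_closed_manifold(faces):
    """A triangle mesh can only be a closed (watertight) manifold surface
    if every edge is shared by exactly two faces."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron is closed; removing one face opens a hole.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed_manifold(tetra))        # True
print(is_closed_manifold(tetra[:-1]))   # False
```

This edge count is a necessary condition rather than a complete manifold test, but it is the one that scan holes and stray polygons most often violate.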

Full-resolution model.

Low-resolution model.


Step Five — Create Clean Topological Surfaces and Multiple Resolutions

The raw full-resolution models produced by the end of this final edit are often very large, ranging from several hundred thousand to several million polygons per model. This is generally too large for most computers to handle smoothly. Furthermore, after the final stage of editing is complete, the topology, or arrangement of polygons on the surface of the model, is what we refer to as “polygon soup”: areas of extremely dense polygon clusters sit alongside areas of large, over-stretched polygons. The next step in the pipeline fixes both issues, using the two models that were saved out of Geomagic Studio.

This is a two-step process. First, we load the low-density mesh of around 1,000 polygons into ZBrush. We then subdivide this low-density model’s geometry five or six times, until we have roughly the same polygon count as the original model. This results in an overly smooth base model of the same shape and size as the original object, but one that now contains several successive levels of consistent polygon-mesh density. We then superimpose, or project, the original model onto the smooth surface of this subdivided low-density mesh, which reintroduces the detail and proper dimensions of the original model. The resultant model has a clean topological surface at every level of subdivision, and is far more flexible in terms of file size and usability. This allows us to tailor our models to their individual purposes.
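ZBrush's subdivision also smooths the surface, which this sketch does not attempt; it only illustrates the bookkeeping. Each subdivision level splits every triangle into four, so a roughly 1,000-polygon base mesh reaches about a million polygons after five levels (1,000 × 4^5 = 1,024,000). Function and variable names here are hypothetical.

```python
import numpy as np

def midpoint_subdivide(vertices, faces):
    """One level of midpoint subdivision: split each triangle into four
    by inserting a new vertex at the midpoint of every edge."""
    vertices = [tuple(v) for v in vertices]
    midpoint_index = {}

    def mid(u, v):
        key = (min(u, v), max(u, v))
        if key not in midpoint_index:
            p = tuple((np.array(vertices[u]) + np.array(vertices[v])) / 2.0)
            midpoint_index[key] = len(vertices)
            vertices.append(p)
        return midpoint_index[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices), new_faces

# Each level quadruples the face count: 4 faces * 4**3 = 256 after 3 levels.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
for _ in range(3):
    verts, faces = midpoint_subdivide(verts, faces)
print(len(faces))  # 256
```

The projection step then moves each of these regularly spaced vertices onto the nearest point of the original high-resolution surface, restoring the detail the smooth base lacks.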

Low-resolution model.

Low-resolution model subdivided.

Low-resolution model and full-resolution model appended.

Final low-resolution model after detail projection.



Detail projection in ZBrush.

Step Six — Create Texture Map

Now that the model is fully edited, has a consistent topology, and contains several available levels of resolution, the next step is to create a texture map. A texture map is the color data that we overlay onto the model’s surface. We create it using the photographs taken of the object before scanning. Each photograph is cut from its background and then projected onto the surface of the model in the same orientation. Generally, six photographs are sufficient to create a texture map that covers the model’s surface; however, for objects with highly complex surface morphology, additional photos are taken to fill in any gaps. The result is a dimensionally accurate virtual object with photographically accurate color data.
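ZBrush handles this photo projection interactively; as a sketch of the underlying idea, the hypothetical helper below assigns UV texture coordinates by projecting vertices orthographically along one axis, as if a single photograph had been taken from that direction.

```python
import numpy as np

def planar_uv(vertices, axis=2):
    """Assign UV coordinates by orthographic projection along one axis,
    the way a photograph taken from that direction maps onto the surface.
    Coordinates are normalized into [0, 1]."""
    dims = [d for d in range(3) if d != axis]
    uv = vertices[:, dims].astype(float)
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)  # avoid divide-by-zero
    return (uv - lo) / span

# Project a small mesh along z, as if photographed from directly above.
verts = np.array([[0.0, 0, 0], [2, 0, 1], [2, 4, 0], [0, 4, 2]])
uv = planar_uv(verts, axis=2)
print(uv)
```

One such projection covers only the surface visible from that direction, which is why a set of six (or more) photographs from different orientations is needed to texture the whole model.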

Model and photo.

Photo being manipulated on top of the model.

Final positioning of photo on the model.

Photo applied to model.

Fully textured model.


Texturing with photographs in ZBrush.


Step Seven — Translate Files into Distributable Formats

The final stage in our modeling pipeline is translating and rendering these files into a usable and distributable format. For this we use Daz Studio, which allows us to set up materials, lighting, and orientation, and to export a .u3d file that we then convert into a .pdf. This .pdf can be viewed by anybody with a current version of Adobe Acrobat installed. From within the .pdf, measurements can be made, proper taxonomic orientations can be viewed, and comparisons can be conducted, among many other possible uses. Aside from digital distribution, these models can also be reproduced physically: once a model has undergone its final stage of editing, the resultant Euclidean geometry can supply all the dimensional data needed for secondary fabrication processes such as casting, machining, or rapid prototyping.


Final low-resolution PDF file of the vertebra. Click the image to activate 3D.