Content-Adaptive 3D Mesh Modeling for Representation of Volumetric Images

Mesh modeling of an image involves partitioning the image domain into a collection of non-overlapping patches, called mesh elements. The image function is then determined over each element through interpolation. A critical issue in mesh modeling is how to determine the mesh structure for a given image.
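As a rough illustration of the interpolation step (not the paper's implementation; tetrahedral elements and linear interpolation are assumptions made here for concreteness), the value at a query point inside one element can be computed as a barycentric-weighted average of the values stored at the element's nodes:

```python
import numpy as np

def barycentric_weights(vertices, point):
    """Barycentric coordinates of `point` w.r.t. a tetrahedron given as a 4x3 array."""
    # Solve [v1-v0, v2-v0, v3-v0] * w = point - v0 for the last three weights.
    T = (vertices[1:] - vertices[0]).T          # 3x3 matrix of edge vectors as columns
    w123 = np.linalg.solve(T, point - vertices[0])
    return np.concatenate(([1.0 - w123.sum()], w123))

def interpolate_in_tetrahedron(vertices, nodal_values, point):
    """Linearly interpolate the nodal image values at `point` inside one mesh element."""
    w = barycentric_weights(np.asarray(vertices, float), np.asarray(point, float))
    return float(np.dot(w, nodal_values))

# Example: a unit tetrahedron with image intensities attached to its four nodes.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
values = [10.0, 20.0, 30.0, 40.0]
print(interpolate_in_tetrahedron(verts, values, (0.25, 0.25, 0.25)))  # 25.0
```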
In this paper we propose a fully 3D approach to content-adaptive mesh generation. Based on a derived error bound for a 3D mesh representation, we design an algorithm that adaptively distributes mesh nodes (and hence mesh elements) in the 3D image domain so that the error achieved by the mesh representation is kept small over each individual element.
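The following is a simplified sketch of the error-guided placement idea only: for linear interpolation the local error grows with the image's second derivatives, so nodes can be placed with a density proportional to a second-derivative feature map. The Laplacian-magnitude feature and the random sampling used below are illustrative assumptions, not the algorithm published in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def adaptive_node_sample(volume, n_nodes, sigma=1.0, seed=None):
    """Sample `n_nodes` voxel positions with probability ~ |Laplacian| of the smoothed volume."""
    rng = np.random.default_rng(seed)
    smoothed = gaussian_filter(volume.astype(float), sigma)
    feature = np.abs(laplace(smoothed)) + 1e-12       # avoid an all-zero density
    prob = (feature / feature.sum()).ravel()
    flat_idx = rng.choice(prob.size, size=n_nodes, replace=False, p=prob)
    return np.array(np.unravel_index(flat_idx, volume.shape)).T   # (n_nodes, 3) voxel indices

# Example: nodes concentrate around the boundary of a bright cube in a toy volume.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
nodes = adaptive_node_sample(vol, n_nodes=200, seed=0)
print(nodes.shape)   # (200, 3)
```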
The next three figures visualize the spatial placement of the mesh nodes with respect to the volumetric objects. This allows us to examine the locations of the nodes relative to the object boundaries and to evaluate the algorithms intuitively. For low interpolation error, the mesh nodes should be placed around the object boundaries.
Nodal positions are shown as white dots.

For more information refer to:
Content-Adaptive 3D Mesh Modeling for Representation of Volumetric Images
Jovan G. Brankov, Yongyi Yang, and Miles N. Wernick 

IEEE International Conference on Image Processing (ICIP 2002), September 22-25, 2002, Rochester, New York, USA

 Proposed method

Octree

Clearly, the proposed method produces a mesh structure that is better adapted to the objects being represented.

For more details, send a request to: jovan [at] brankov [dot] com