
In this chapter we completed the evaluation of the refinement reconstruction. The experiment requires a local homotopy stable sampling set which is sparser than the local-feature-size-based samplings required by related algorithms. To provably meet the demands on the sampling we derived a stability criterion on the critical points and the discrete homotopical axis.

The computation of the discrete homotopical axis is derived from the flow relation framework, which is also an underlying step in refinement reconstruction. The stability of the computed homotopical axis provides a feature size estimate at the data points and consequently the lower bounds for the sampling.

The estimated feature size defines the maximum distance to the nearest neighbor, so all points with a lesser distance are not required in the data set. Successive deletion of points, sorted in increasing order according to the estimated value, provably results in a local homotopy stable sampling.
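This decimation scheme can be illustrated by a minimal sketch (the function names and the brute-force nearest-neighbor search are our own illustrative choices, not the implementation evaluated in the experiments): points are visited in increasing order of their estimated feature size and deleted whenever the nearest remaining neighbor is closer than that estimate.

```python
import math

def decimate(points, feature_size):
    """Delete points, in increasing order of their estimated feature
    size, whenever the nearest remaining neighbour is closer than
    that estimate (so the point is redundant for the sampling)."""
    kept = set(range(len(points)))
    for i in sorted(range(len(points)), key=lambda k: feature_size[k]):
        others = kept - {i}
        if not others:
            break
        nearest = min(math.dist(points[i], points[j]) for j in others)
        if nearest < feature_size[i]:
            kept.discard(i)
    return [points[i] for i in sorted(kept)]
```

Deleting in increasing order of the estimate ensures that a point is only removed while a sufficiently close neighbor still survives, which is what makes the result provably local homotopy stable.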

The framework on the stability of critical points and the homotopical axis developed experimentally in this chapter extends the results on boundary reconstruction and offers a basis for further research on homotopy equivalence of reconstructed space partitions. We provided arguments for homotopy type preservation in the discrete homotopical axis computation, deriving the stability of the reconstruction on decimated data sets.


Fig. 6.4: (a)–(d) Left: original mesh; right: reconstruction from the decimated local homotopy stable point set.

The experiments use a nonmanifold multi-regional shape which is non-uniformly sampled according to the framework on data set decimation derived in this chapter. The underlying shape is nonmanifold, not smooth and multi-regional, which makes reconstruction impossible with all related computational topology based approaches known to us. The thinned-(α, β)-shape-reconstruction handles nonmanifold multi-regional shapes but does not process locally adaptive samplings, which is successfully carried out by refinement reconstruction. So we developed an experiment which shows the advantage of refinement reconstruction over thinned-(α, β)-shape-reconstruction and all related computational topology based approaches known to us.

Chapter 7

Conclusion and Outlook

The main goal of this work was to define locally variable sampling conditions and a reconstruction method which results in a digital representation of a given real world scene provably preserving the original topological properties. The starting point for our research was the theoretical framework and reconstruction method for non-manifold 3D-surface reconstruction introduced in [Stelldinger, 2008b].

This framework requires the sampling parameters to be globally set for the whole scene.

The real world scene in [Stelldinger, 2008b] is divided into a set of disjoint open regions called a space partition. It is assumed that the space partition consists of more than two regions and that the boundaries of more than two regions can intersect. It follows that the boundary of the assumed space partition is a non-manifold 2D-surface embedded into 3D Euclidean space. The research aim in [Stelldinger, 2008b] is then the topology preserving reconstruction of the non-manifold 2D-boundary. This research on sampling raises the questions of what the sparsest sampling conditions are and how much noise can be handled by the reconstruction method.

The sampling conditions are defined in such a way that the homotopy-equivalent and geometrically similar subsets of the original regions are enveloped by sampling points denser than the tightest narrowing in the subsets. In other words, the discrete distance value on the original boundary is always less than the discrete distance value at the critical point with the smallest continuous distance value.

The sampling density serves as a parameter to adjust the α-parameter of the reconstruction algorithm.

The α-shape already separates the relevant homotopy-equivalent regions correctly but contains spurious holes. The maximal sample point deviation is then used to detect the holes which do not correspond to any original region. Regions smaller than the internal parameter β, which is computed from the maximal sample point deviation, are filled, and the resulting thick boundary is then thinned by a topology-preserving post-processing step.

The sampling conditions defined in [Stelldinger, 2008b] ensure that even the tightest narrowing is sufficiently sampled. But the sampling density is then constant for the whole boundary. In our framework we explore the idea of limiting the sampling density and, depending on the sampling density, of limiting the maximal sample point deviation. Our requirement for the sampling is that all steepest increasing paths on the discrete distance transform, starting in the relevant subsets of the original regions, stay in these relevant parts. This makes it possible to define locally adaptive sampling conditions.

The framework developed in [Edelsbrunner, 2003] defines a strict relation called “flow relation” on Delaunay simplices which mimics the steepest increasing paths. The result of the reconstruction method called “Geomagic Wrap” (applied on each Delaunay tetrahedron containing its own circumcenter) is the 3D extension of the Gabriel graph [Gabriel and Sokal, 1969]. We call this reconstruction step on samplings fulfilling our conditions the Elementary Thinning.
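The Gabriel condition underlying this step admits a direct check. The following sketch (our own illustrative code, not part of the elementary thinning implementation) tests whether an edge belongs to the Gabriel graph, i.e. whether its diametral ball contains no other sample point:

```python
import math

def is_gabriel_edge(p, q, points):
    """An edge (p, q) is Gabriel iff no other sample lies strictly
    inside the open ball whose diameter is the edge itself."""
    center = tuple((a + b) / 2 for a, b in zip(p, q))
    radius = math.dist(p, q) / 2
    return all(
        math.dist(center, r) >= radius
        for r in points
        if r != p and r != q
    )
```

In 3D the analogous test on Delaunay tetrahedra (does the simplex contain its own circumcenter?) selects the simplices that survive the thinning step.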

The result of elementary thinning is a correct separation of local maxima, where the discrete local maxima are uniquely associated with continuous local maxima. The separation is called correct if the discrete local maxima associated with continuous local maxima lying in different original regions are in different discrete regions.

The original regions may contain more than one local maximum. But the result of elementary thinning is the separation of all maxima. Such a separation corresponds to an oversegmentation of the original scene. Since the oversegmentation naturally carries boundary elements which may be removed to merge different regions while preserving correct separation, we call such a separation a refinement.

The Wrap algorithm follows the steepest decreasing path in the discrete distance transform and successively deletes the Delaunay simplices from the complex which it passes through. The remaining Delaunay simplices then have the smallest discrete distance values in the relation. We call such a boundary reconstruction the minimal refinement.

The medial axis [Blum, 1967] is a complete shape descriptor, but only a subset of the medial axis is necessary to represent the homotopy type of the shape or, as we call it, the region. Contractible regions have star-like medial axes; their homotopy type can be represented by a single point. The local region size measures, for each point on the boundary, the minimal distance value of the local maxima reachable by steepest ascent on the continuous distance transform. This value gives a lower bound on the largest inscribed ball of the region. Thus only the local maxima are relevant for the definition of the local region size.
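A discrete analogue of this steepest ascent can be sketched on a 2D grid distance map (an illustrative simplification of the continuous distance transform, with names of our own choosing):

```python
def local_region_size(dist, start):
    """Follow the steepest ascent on a 2D discrete distance map until
    a local maximum is reached; its value lower-bounds the largest
    inscribed ball of the region containing `start`."""
    rows, cols = len(dist), len(dist[0])
    r, c = start
    while True:
        best = (r, c)
        for dr in (-1, 0, 1):          # 8-connected neighbourhood
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and \
                        dist[nr][nc] > dist[best[0]][best[1]]:
                    best = (nr, nc)
        if best == (r, c):             # local maximum reached
            return dist[r][c]
        r, c = best
```

Different boundary points may ascend to different local maxima, which is exactly why the local region size takes the minimum over the reachable maxima.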

Our new sampling conditions limit the ratio between the discrete distance value on the boundary and the local region size. As a result, the discrete distance value at the associated local maximum is at least a fraction of the continuous distance value at the original local maximum. Since the reconstructed boundary is minimal, by scaling the discrete distance value by the reciprocal of the fraction we obtain a bound on the value of the maximal boundary simplex. Thus all boundary simplices which exceed this value cannot separate different regions and can be deleted. We call such region merging on too large simplices the refinement reduction.
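The merging step of refinement reduction amounts to a union of regions across oversized boundary simplices, which a disjoint-set structure captures naturally (an illustrative sketch with our own names; the actual implementation operates on the Delaunay complex):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def refinement_reduction(n_regions, boundary, threshold):
    """Merge the two regions on either side of every boundary simplex
    whose size exceeds the threshold, since such a simplex cannot
    separate different original regions.
    `boundary` is a list of (size, region_a, region_b) triples."""
    ds = DisjointSet(n_regions)
    for size, region_a, region_b in boundary:
        if size > threshold:
            ds.union(region_a, region_b)
    return ds
```

Boundary simplices at or below the threshold are kept, so correctly separated regions stay separate while the oversegmentation is merged away.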

In this work we show that the sampling conditions defined by the local region size are insufficient to obtain a reconstruction which can be reduced, by further deletion of boundary simplices, to a homotopical equivalent of the original space partition. So we defined a new subset of the medial axis: the minimal superset of all discrete critical points. As proven in [Chazal and Lieutier, 2005a], this subset of the medial axis, in our framework called the homotopical axis, is homotopy-equivalent to the medial axis and, as we argued, the homotopical axis is homotopy-equivalent to the space partition.

The local homotopical feature size is the minimum of the local region size and the distance to the homotopical axis. The local homotopical feature size measures the distance to the critical points and to the steepest paths between the critical points. So the sampling conditions defined by the local homotopical feature size ensure a denser sampling in narrowings. The result of refinement reduction is then a boundary reconstruction which is reducible to a space partition whose boundary does not cut the original homotopical axis. So the result is reducible to the correct separation of the connected components of the original homotopical axis.
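In symbols, for a boundary point $x$ (writing $\operatorname{lrs}$ for the local region size, $\mathrm{HA}$ for the homotopical axis and $d$ for the Euclidean distance; the notation here is ours):

```latex
\operatorname{lhfs}(x) \;=\; \min\bigl(\operatorname{lrs}(x),\; d(x,\mathrm{HA})\bigr)
```

Near a narrowing the distance to the homotopical axis is small, so the minimum is attained by the second term and the sampling condition forces a denser sampling there.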

As we have shown, our algorithm for refinement reconstruction has advantages over all reconstruction methods based on a computational geometry approach known to us, including the “Thinned-(α, β)-Shape-Reconstruction” published in [Stelldinger, 2008b]. In summary:

• The class of shapes which can be handled by our new refinement reconstruction has been extended to space partitions consisting of multiple regions with non-manifold boundary. So the assumed shapes generalize r-regular, r-halfregular, non-smooth and r-stable objects. (Note that in [Stelldinger, 2008b] the multi-regional space partitions with non-manifold boundary are classified by the value of maximal boundary dilation which does not change the homotopy type.)

• The assumption of a multi-regional space partition enables the reconstruction for volume-based samplings such as in computed tomography or magnetic resonance imaging.


• The refinement reconstruction handles locally non-uniform and highly noisy samplings. To our knowledge, the sampling conditions generalize the requirements made in [Stelldinger, 2008b] as well as the sampling conditions defined for all topology preserving reconstruction methods based on a computational geometry approach.

• The method handles noise arising from blurring which is here defined as excessive sample point deviation from the boundary, as well as a large amount of outliers (over 20% in our experiments).

• Given the sampling conditions and parameters as defined in [Stelldinger, 2008b], the refinement reconstruction results in an equivalent boundary approximation.

However, our method reconstructs only a reducible refinement. The result preserves the original topological properties but needs further processing and knowledge to be reduced to a homotopy-equivalent space partition.

7.1 Contributions

In our ambition to make this work self-contained and lucid we achieved theoretical, experimental and empirical contributions. In the introduction of theoretical concepts we presented:

• The proof that the local maxima of the distance transform are the local maxima of the corresponding medial axis.

• The definition of the homotopical axis as the smallest superset of critical points and steepest increasing paths between them and derivation of the homotopical equivalence to the space partition.

• The definition of the local region size as a function which maps each boundary point to the smallest distance value of the local maxima reachable by steepest increasing paths.

• The definition of the local homotopical feature size as the minimum between the local region size and the distance to the homotopical axis.

• The proof of equivalence between the concepts of equivocal and “Not-Gabriel” simplices which is needed to combine the concepts in the framework of refinement reconstruction.

• Observations and proven claims on the geometrical dependence between certain Delaunay simplices.

• A discussion on the comparison of Delaunay simplices and the introduction of a new size of a simplex as the largest distance value in the simplex, which corresponds to the flow defined on the discrete distance transform.

• A proposition on how to compute the size of a simplex and a discussion of when the computation is necessary.
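The size computation mentioned in the last two items can be illustrated in 2D, under our own simplifying assumptions (this sketch is not the thesis's proposition itself): for a triangle the maximum of the distance function is the circumradius when the circumcenter lies inside; for an obtuse triangle the circumcenter lies beyond the longest edge, and the flow passes the maximum to that edge, whose own size is half its length.

```python
import math

def triangle_size(a, b, c):
    """Size of a non-degenerate 2D triangle: circumradius if the
    circumcenter lies inside; otherwise half the longest edge, the
    face beyond which the circumcenter lies."""
    ax, ay = a; bx, by = b; cx, cy = c
    # circumcenter from the perpendicular-bisector equations
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d

    def cross(p, q, r):                # orientation of r w.r.t. line pq
        return (q[0]-p[0]) * (r[1]-p[1]) - (q[1]-p[1]) * (r[0]-p[0])

    s1 = cross(a, b, (ux, uy))
    s2 = cross(b, c, (ux, uy))
    s3 = cross(c, a, (ux, uy))
    inside = (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
             (s1 <= 0 and s2 <= 0 and s3 <= 0)
    if inside:
        return math.dist(a, (ux, uy))  # circumradius
    return max(math.dist(a, b), math.dist(b, c), math.dist(c, a)) / 2
```

The case split is the reason the computation is only sometimes necessary: for simplices containing their own circumcenter the size is just the circumradius.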

The evaluation of the framework developed for thinned-(α, β)-shape-reconstruction contributed in the following respects:

• Unification of sampling conditions and comparison to other approaches showed that the thinned-(α, β)-shape-reconstruction has advantages over all other reconstruction methods known to us before we developed refinement reconstruction.

• Experiments on laser range scan data sets with nearly noise-free dense as well as blurred data, which demonstrate equivalent results achievable with previous reconstruction methods as well as the advantage and robustness of the algorithm on blurred data sets with excessive sample point deviation.

• Experiments with samplings taken from non-manifold boundary of multi-regional space partitions demonstrating the ability of the method to reconstruct homotopy-equivalent surfaces.

• Discussion of problems of thinned-(α, β)-shape-reconstruction due to excessive noise corruption and propositions to overcome these. The excessive noise corruption causes topological distortions on the reconstructed boundary which can be removed only under further assumptions. We propose an extension of the algorithm to detect certain topological artifacts such as chains of singular edges or surface patches without boundary which do not separate two different regions and have the same region on both sides.

Taken together, the main contribution of our work is the introduction of the theoretical framework, the derivation of guarantees and proofs, and the evaluation of the refinement reconstruction.

The theoretical framework includes:

• Introduction of a unique association between the local maxima of the distance transform defined on the original boundary and the local maxima of the distance transform defined on the sample points.

• Definition of a refinement as the correct separation of associated local maxima.

• Definition of a minimal refinement consisting of minimal Delaunay simplices according to steepest decreasing paths starting on local maxima.

• Definition of locally adaptive sampling conditions which generalize all sampling conditions known to us and contain information on local homotopy.

• Proof that application of the constructed retraction algorithm on Delaunay tetrahedra containing their own circumcenter results in a minimal refinement.

• Proof that minimal refinement can be reduced by merging of regions on boundary simplices which exceed a certain value. This value can be computed by the largest circumradius of all Delaunay simplices in the region.

• Proof that the result of refinement reconstruction is reducible to a stable refinement which correctly separates the connected components of the homotopical axis.

The evaluation of the reconstruction algorithm yielded the following new results:

• Given the sampling parameters as required for thinned-(α, β)-shape-reconstruction, the result of refinement reconstruction is homotopy-equivalent.

• The sampling conditions defined for refinement reconstruction generalize all sampling conditions defined for previous reconstruction methods known to us.

• Refinement reconstruction handles the sparsest sampling and the largest amount of noise of all methods known to us.

• Refinement reconstruction results in a reducible refinement on samplings of non-manifold boundaries of multi-regional space partitions.

• Our method handles large data sets from laser range scanners. Even if we cannot guarantee that the result is homotopy-equivalent, in practice we observed no need for reduction on very dense sampling sets.

• Our new method reconstructs the non-manifold boundary of a multi-regional space partition which has been sampled by a volume-based approach such as computed tomography, post-processed by a 3D Canny edge detection algorithm for point cloud extraction.