

Method | Detected regions | Correctly detected (% of 164 manually labeled vesicles) | False positives (% of detected regions / % of labeled vesicles) | Not detected (% of 164 manually labeled vesicles)
1. Gradient Tracing, threshold: 40% of max | 321 | 149 (91) | 172 (53 / 105) | 15 (9)
2. Laplace Transform, threshold: 54% of max | 132 | 100 (61) | 32 (24 / 20) | 64 (39)
2. Laplace Transform, threshold: 41% of max | 196 | 140 (85) | 56 (29 / 34) | 24 (15)
3. Cross-Validation of un-thresholded 1 & 2 | 190 | 145 (88) | 45 (23 / 27) | 19 (12)
3. Cross-Validation on second test set | 190 (of 136 manually labeled) | 123 (90% of 136) | 67 (35% of 190 / 49% of 136) | 13 (10% of 136)

Table 8.1: Summary of vesicle detection results for the Gradient Tracing method (row 1), the Laplace method with two different thresholds (rows 2 and 3), and the cross-validation of the former two unthresholded partial results (row 4), computed on the large ROI in the lamina of Drosophila melanogaster (Figure 7.9.b); row 5 gives the cross-validation result on a second test set (figure not shown).

The advantage of this method compared to the Gradient Tracing and the simple Laplace method is that the product of the two results enhances regions where both methods have overlapping high responses. The vesicle centers are therefore represented by relatively large spots, which can be selected directly by setting a minimal allowed spot size. In contrast, for the simple methods the detected spots first have to be enlarged by a Seed-Fill step, which requires three additional parameters. Since these methods demand more expertise from the user, they are more prone to faults.
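As an illustration of this spot-size selection, here is a minimal Python sketch; the function name, the epsilon cutoff, and the default spot size are assumptions for illustration, not values from the thesis:

```python
import numpy as np
from scipy import ndimage

def select_vesicle_spots(resp_a, resp_b, min_spot_size=20, eps=1e-6):
    """Cross-validate two detector responses by voxel-wise product and
    keep connected spots of at least `min_spot_size` voxels."""
    product = resp_a * resp_b          # large only where both methods respond
    mask = product > eps               # drop numerically negligible overlap
    labels, n = ndimage.label(mask)    # connected components (face connectivity)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_spot_size) + 1
    return np.isin(labels, keep_ids)   # boolean mask of accepted vesicle spots
```

In this form the minimal spot size is effectively the only free parameter, which is what makes the combined detector nearly parameter free.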

8.4 Conclusions

In this chapter it was shown that the multiscale differential operators cannot be used directly for feature detection (as is mostly done for artificial benchmark data (Lindeberg, 1998; Florack et al., 1994)), but have to be embedded in more elaborate systems to give the desired results.

First, branching points and sharp bending points were marked by the negative extrema of the multiscale Gaussian curvature operator. Since the operator computes the curvature of iso-intensity surfaces, it is sensitive to the gray value variations inside the neuronal branches which arise due to inhomogeneous staining.

Applied instead to the binary segmented neuron, the operator localizes the saddle points of the neuronal branches well. This led to the first successful application of the operator to real-world 3D data (for applications to artificial data see Florack et al. (1994)). Note that the key points of the neuron are detected independently of the skeleton computation itself; hence artifacts of the skeleton calculation do not impair the detection of key points, as they do in previous work (cf. Cesar Jr. and Costa (1998) for 2D and He et al. (2001), Zhou and Toga (1999) for 3D). This is a novelty for neuronal 3D image processing which will significantly simplify subsequent work.
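The thesis gives no code for this operator; the following single-scale sketch is one plausible realization, using the standard closed form for the Gaussian curvature of an implicit (iso-intensity) surface, K = (g^T adj(H) g) / |g|^4, with gradient g and Hessian H estimated by Gaussian derivative filters. All names and the scale parameter are illustrative; a multiscale version would evaluate this at several sigmas and search for negative extrema:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_curvature(vol, sigma):
    """Gaussian curvature of the iso-intensity surfaces of `vol`
    at Gaussian scale `sigma`."""
    v = vol.astype(float)
    d = lambda *order: gaussian_filter(v, sigma, order=order)
    gx, gy, gz = d(1, 0, 0), d(0, 1, 0), d(0, 0, 1)       # gradient
    hxx, hyy, hzz = d(2, 0, 0), d(0, 2, 0), d(0, 0, 2)    # Hessian diagonal
    hxy, hxz, hyz = d(1, 1, 0), d(1, 0, 1), d(0, 1, 1)    # Hessian off-diagonal
    # adjugate of the (symmetric) Hessian, written out per cofactor
    axx = hyy * hzz - hyz * hyz
    ayy = hxx * hzz - hxz * hxz
    azz = hxx * hyy - hxy * hxy
    axy = hxz * hyz - hxy * hzz
    axz = hxy * hyz - hxz * hyy
    ayz = hxy * hxz - hyz * hxx
    num = (gx * (axx * gx + axy * gy + axz * gz)
           + gy * (axy * gx + ayy * gy + ayz * gz)
           + gz * (axz * gx + ayz * gy + azz * gz))
    grad2 = gx * gx + gy * gy + gz * gz
    return num / np.maximum(grad2 * grad2, 1e-12)          # K = g^T adj(H) g / |g|^4
```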

Additionally, the strong localization condition usually imposed on corner detectors can be weakened for this application to an approximate cover of the branching or bending region, since a cross-validation with heuristically detected branching points can be performed in a further processing step (as described in Section 9.3.1).

Secondly, the problem of circular object detection was approached with the multiscale Laplace transform. It was shown that the Laplace transform by itself, similarly to the Gaussian curvature operator, does not suffice to complete the detection task successfully; a post-processing step that also includes the Seed-Fill segmentation is needed. It was further shown that a cross-validation of the two independent methods for circular object detection (i.e. the one based on Gradient Tracing, Section 7.4.4, and the one presented here) eliminates some of the false positive responses, such that the reliability of the validated regions is increased. This allows an almost parameter-free vesicle detection.
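A minimal sketch of such a multiscale Laplace detector, assuming bright vesicles on a dark background; the scale list and the sigma-squared normalization follow Lindeberg (1998) in spirit and are not the exact thesis parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_laplace(vol, sigmas=(1.0, 1.5, 2.0, 3.0)):
    """Per-voxel maximum of the scale-normalized negative Laplacian,
    -sigma^2 * LoG; bright blobs respond most strongly at the scale
    matching their radius."""
    v = vol.astype(float)
    responses = [-(s ** 2) * gaussian_laplace(v, s) for s in sigmas]
    return np.max(responses, axis=0)
```

The resulting response map still has to be post-processed, either by the Seed-Fill segmentation or, as above, by multiplication with the Gradient Tracing response before spot-size selection.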

This is the first time that such composite systems have been established and used for the detection of branching and bending points, as well as for the detection of circularly shaped vesicles. It was shown that these systems provide significantly better results than the simple differential operators alone.

Chapter 9

Graph Construction

The problem of automatic skeleton construction for neurons is, due to the large variety of neuronal cell types and image characteristics (enumerated in Chapter 2), a controversial and still not completely solved issue. The current chapter first presents the currently available methods (Section 9.1), which show that better results are obtained if appropriate prior knowledge about the extracted shapes is used (Kim and Gilies, 1998). Therefore the assumptions made by the skeletonization algorithm presented here and the general conditions imposed on the final skeleton are stated in Section 9.2. Thereafter a new method for the computation of a raw skeleton, augmented with estimates of axial directions and of critical processing regions, is introduced in Section 9.3. The information gathered so far is finally used by the main contribution of this work, the graph construction algorithm (Dima et al., 2003) (Section 9.4). The results (Section 9.5) show the partial results arising from each processing step and the influence of different parameter choices on the final properties of the graph, and present complete reconstructions of neurons having different spatial and noise characteristics. A discussion and comparison with other skeletonization procedures follows in Section 9.6, and conclusions are drawn in Section 9.7.

9.1 State of the Art

The skeletonization of 3D neuron scans is currently most often performed by an electronically assisted manual procedure (Neurolucida, Micro Brightfield Inc., or Eutectic, Eutectic Electronics Inc.) which facilitates the hand tracing of the specimen and its storage in digitized form. For complicated objects like neuronal datasets (usually 512×512×100 voxels with thousands of branches) this is very tedious and time consuming (de Schutter and Bower, 1994a), and it also lacks the possibility of an objective quality assessment of the reconstruction result (Capowski, 1989). For this reason a precise automatic reconstruction of neurons from 3D confocal microscope scans is needed.

The problem of automatic skeleton construction has proven to be non-trivial even in 2D. The issue is discussed very broadly in the literature, starting with abrasive procedures like grass-fire (Blum, 1967) and thinning algorithms (Naccache and Shinghal, 1984), continuing with distance maps (Gagvani and Silver, 1997; Ge and Fitzpatrick, 1996), and not least with the formulation of analytic solutions for the center line construction of difficult parameterized 2D shapes (Shaked and Bruckstein, 1996).

Thinning algorithms (Naccache and Shinghal, 1984; Mao and Sonka, 1996; Ranwez and Soille, 1999) rely on the deletion of border points, as long as their deletion does not disrupt the topological structure of the underlying data. Unfortunately, these procedures are sensitive to noise and to border irregularities of the objects.
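This sensitivity is easy to reproduce. The following 2D toy example (hypothetical, using scikit-image's thinning-based skeletonize) shows how a single protruding border pixel typically spawns a spurious skeleton branch:

```python
import numpy as np
from skimage.morphology import skeletonize

clean = np.zeros((40, 40), dtype=bool)
clean[10:30, 5:35] = True                  # a plain filled rectangle
noisy = clean.copy()
noisy[30, 20] = True                       # one extra pixel on the border

extra = skeletonize(noisy) & ~skeletonize(clean)
print(extra.sum())                         # usually > 0: a spur toward the bump
```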

The analytic solution of symmetry line reconstruction (Shaked and Bruckstein, 1996) is a geometrically precise procedure, but it requires a parameterized analytic expression of the analyzed curve rather than the common discrete pixelized form. Whereas a parametric description of the object borders is feasible in 2D, the approximation of parametric surfaces in 3D is computationally expensive.

The extension of the 2D skeletonization paradigms to 3D space is often not straightforward. Currently there are several different approaches to extract 3D center line graphs: 1) the 3D extension of thinning (Mukherjee et al., 1989; Latecki and Ma, 1996; Mao and Sonka, 1996; Ranwez and Soille, 1999), 2) continuous skeletonization by the construction of Voronoi diagrams (Ogniewicz, 1992; Näf et al., 1996), 3) distance maps containing, for each discrete point, the minimal distance to the object's boundary (Ge and Fitzpatrick, 1996; Gagvani and Silver, 1997; Zhou and Toga, 1999), 4) tracing the highest gray values of the original gray value image along a smooth path (Herzog et al., 1998), and 5) surface shrinking in the gradient direction of a distance field (Schirmacher et al., 1998); a sketch of the distance-map approach 3) is given below.
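As a sketch of approach 3), the following computes a Euclidean distance map on a binary segmentation and marks its local maxima as center line candidates; names are illustrative. Note that such candidates are in general not connected, in line with the topology problems discussed below:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def distance_map_candidates(mask):
    """Distance of each foreground voxel to the object's boundary, plus
    ridge voxels where the distance is a 3x3x3-local maximum."""
    dist = distance_transform_edt(mask)
    ridge = mask & (dist == maximum_filter(dist, size=3))
    return dist, ridge
```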

3D center line graph construction is mostly done on cylindrically shaped biological data, like bronchial arterioles, sulci of the brain (Zhou and Toga, 1999), blood vessels (Zahlten et al., 1995), bones (Näf et al., 1996), colon, trachea or sinus (Gagvani and Silver, 1997; Vilanova et al., 1999). This kind of data is relatively simple to process, since the images have high contrast and relatively smooth contours.

The analysis of 3D microscope images of neurons is more difficult, because neuronal scans can have very different spatial branch and noise distributions, and low signal-to-noise ratios (Figure 9.11, Section 9.5.1). Contrast can vary significantly inside one single branch, such that the neuronal branches are either strongly highlighted or interrupted, and the widths of branches can differ by an order of magnitude (Figure 9.27.a). There have been several attempts at automatic skeletonization of 3D neurons (Cohen et al., 1994; Mao and Sonka, 1996; Kim and Gilies, 1998; Xu et al., 1998; Herzog et al., 1998; Schirmacher et al., 1998). Most of them rely on a previous thresholding segmentation of the image (Xu et al., 1998; Schirmacher et al., 1998; Kim and Gilies, 1998), which either loses information or includes too much noise. Skeletonization based on thinning (Xu et al., 1998; Kim and Gilies, 1998; Mao and Sonka, 1996), surface shrinking (Schirmacher et al., 1998), and distance maps (Ge and Fitzpatrick, 1996; Gagvani and Silver, 1997; Zhou et al., 1998) is sensitive to border irregularities of the object. It is also known that distance maps are not topology preserving (Vilanova et al., 1999). Since surface shrinking relies on a preliminary distance map, it suffers from the same drawback.

The work of Cohen et al. (1994) and He et al. (2001) shows that it is in principle possible to design automatic cell tracking procedures for 3D images. They applied an adaptive segmentation (Roysam et al., 1992), then performed a Seed-Fill on the structures to identify the foreground voxels and to ensure their connectivity, followed by thinning, a graph construction step, and geometrical measurements.
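A generic rendering of that segment, connect, and thin pipeline might look as follows; the global threshold stands in for the adaptive segmentation of Roysam et al. (1992), and all names are assumptions rather than the authors' code:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def trace_pipeline(vol, seed, thresh):
    """Segment, keep the component containing `seed` (the Seed-Fill step),
    then thin it to a voxel skeleton."""
    fg = vol > thresh                    # stand-in for adaptive segmentation
    labels, _ = ndimage.label(fg)        # connected foreground components
    assert labels[seed] != 0, "seed must lie on a foreground voxel"
    connected = labels == labels[seed]   # Seed-Fill: one connected object
    return skeletonize(connected)        # recent scikit-image handles 3D input
```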

The connectivity condition is based on the assumption of smooth gray value variations inside the neuronal branches. This is not always true in high resolution scans, since low signal-to-noise ratios, dye inhomogeneity, and quantization errors may lead to local disconnectedness of the finest processes (i.e. spines). These are the most interesting zones for biologists, since they represent the connection points with other neurons.

Another approach was taken by Xu et al. (1998), who fitted generalized cylinder models onto the neuronal centerline, which was extracted by global thresholding and subsequent thinning. Besides its implicit sensitivity to border irregularities, the 3D extension of the thinning procedure is not straightforward, since the search space grows exponentially1 (Latecki and Ma, 1996) and a definition of minimal skeletons for some elementary 3D shapes cannot be formulated uniquely (Mukherjee et al., 1989).

Some authors have tried to overcome these problems, such as Herzog et al. (1998), who developed a gray value based tracking algorithm with concomitant cylinder fitting, following a smooth path of maximal gray values. Still, at points of sudden contrast decrease inside the branches, this method stops the tracking, thus losing the foreground data lying behind these critical regions. Another attempt was to shrink a surface of an already segmented object (Schirmacher et al., 1998) based on a distance transform. This approach generates very smooth skeleton lines for relatively linear objects, but branching points are represented by surfaces, since the method does not allow punctual shrinking in these areas.

For a skeletonization of high quality, application-specific knowledge has to be plugged in. This is often done by considering problem-specific models, which provide precise information about the structural characteristics of the objects to be analyzed. Kim and Gilies (1998) performed a classification of different development stages of oligodendrocytes using a threshold-based segmentation, followed by a thinning step and a Bayesian classification method.

These tools perform well on the problem at hand, but lack generality.

The neuronal graph construction presented here tries to use prior information while still keeping its algorithms as general as possible. The next section therefore enumerates the assumptions made about the foreground data and the conditions imposed on the graph construction algorithm.

1There are 3^26 possible configurations of the 3×3×3 neighborhood of a point in 3D, since each of the 26 neighboring voxels can have the values black/white/don't care.
