
2.3 Data Analysis

2.3.4 Summary and Discussion

New approaches for image postprocessing and data analysis were required for Fast STED microscopy. Specifically, localization and tracking of neurotransmitter vesicles (and other objects) were desired in the scope of this thesis. The data analysis had to be unbiased by the expectations of human observers and had to handle large amounts of data.

A simple and computationally fast algorithm to find objects, especially neurotransmitter vesicles, in STED movies was developed. The algorithm smooths the images by low-pass filtering and looks for local maxima above a threshold. Its performance was analyzed in detail.
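
The following is a minimal sketch of such a detector, assuming 2D frames stored as NumPy arrays; the function name, the 3x3 neighborhood, and the default values of kernel width and threshold are illustrative assumptions, not the implementation used here.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def find_local_maxima(image, sigma=3.0, threshold=10.0):
        """Smooth with a Gaussian low-pass filter and return the (row, col)
        coordinates of local maxima above the threshold."""
        smoothed = gaussian_filter(image.astype(float), sigma)
        # A pixel is a local maximum if it equals the maximum of its 3x3 neighborhood.
        neighborhood_max = maximum_filter(smoothed, size=3)
        peaks = (smoothed == neighborhood_max) & (smoothed > threshold)
        return np.argwhere(peaks)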

This Local Maxima algorithm was compared to a fitting algorithm and to a machine learning algorithm. As test data, simulations (Figs 2.9 and 2.10) and experimental data (Figs 2.11 and 2.12) were used.

Although the performance varied between data sets according to their respective signal-to-noise ratio, the performance of all three algorithms and of human observers on a given data set was found to be surprisingly similar under most conditions. This holds for specifically optimized parameters (Figs 2.9 and 2.11) and partially for more generally chosen parameters (Figs 2.10 and 2.12), with which the Fit algorithm sometimes performed worse.

The Local Maxima algorithm performed best with a smoothing kernel that is much broader than the PSF (Tables 2.1 and 2.2). On the one hand, such a broad kernel strongly suppresses noise. On the other hand, objects lying close together cannot be resolved. In the (simulated and experimental) movies of the vesicles, detecting noise peaks was a bigger problem than losing some closely spaced particles. One must also keep in mind that the ground truths of the experimental data were manual labels; the human observers, too, might have mistaken two close-by vesicles for one object.
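
This trade-off can be illustrated on a synthetic frame with two spots placed a few pixels apart on a noisy background; all numbers (spot distance, brightness, noise level, threshold) are made up for the illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:64, 0:64]
    frame = rng.normal(0.0, 1.0, (64, 64))               # background noise
    for cy, cx in [(32, 30), (32, 36)]:                   # two spots ~6 px apart
        frame += 20.0 * np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * 2.0**2))

    for sigma in (1.0, 4.0):                              # narrow vs. broad kernel
        smoothed = gaussian_filter(frame, sigma)
        peaks = (smoothed == maximum_filter(smoothed, size=3)) & (smoothed > 3.0)
        print(f"sigma = {sigma}: {peaks.sum()} maxima above threshold")
    # The narrow kernel resolves both spots; the broad kernel suppresses the
    # noise more strongly but merges them into a single maximum.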

The threshold in the Local Maxima algorithm was chosen relative to the brightest object in the movie. This choice relative to one single object might seem unreliable. Because the movies of the neurons are all of similar structure and all start with images containing many objects, this relative threshold worked well. Under unusual conditions (e. g. very short movies or almost no objects in the field of view), however, this relative threshold becomes unreliable. This is seen in the cross validation (Fig. 2.12), where data sets consisting only of the second half of a typical movie were included. Other threshold criteria were evaluated as well: choosing the threshold according to a quantile of the brightest pixels in the movie did not enhance the performance (data not shown). Adapting the threshold within one movie was also considered, because the movies generally get dimmer toward the end due to bleaching; in this case, too, no enhanced performance was found (data not shown).
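
As an illustration of the relative threshold described above, the following sketch sets it to a fixed fraction of the brightest smoothed value found anywhere in the movie; the fraction of 0.2 and the kernel width are assumptions for the example, not the values used in this work.

    from scipy.ndimage import gaussian_filter

    def relative_threshold(movie, sigma=3.0, fraction=0.2):
        """movie: 3D array (frames, y, x); returns one threshold for the whole movie."""
        brightest = max(gaussian_filter(frame.astype(float), sigma).max()
                        for frame in movie)
        return fraction * brightest
    # An adaptive, per-frame variant would instead take a fraction of each
    # frame's own maximum; as discussed above, this did not improve performance.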

The self-learning Random Forest algorithm needs extensive training data. This is a drawback compared to the other two algorithms. Often, training data are not readily available. They can be generated by manual labeling of objects; however, this is cumbersome, and the human observers themselves have only limited success in finding the objects (Fig. 2.9). This drawback is not offset by a considerably enhanced performance.
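
To make the training-data requirement concrete, the sketch below shows one possible form of random-forest pixel classification, assuming manually labeled frames (1 = vesicle pixel, 0 = background); the features (the image smoothed at a few scales) and all parameter values are assumptions for illustration, not the classifier used here.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(image, scales=(1.0, 2.0, 4.0)):
        """Per-pixel feature vectors: the image smoothed at several scales."""
        feats = [gaussian_filter(image.astype(float), s) for s in scales]
        return np.stack(feats, axis=-1).reshape(-1, len(scales))

    def train_classifier(train_image, train_labels):
        """train_labels: manually drawn label image of the same shape."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(pixel_features(train_image), train_labels.ravel())
        return clf

    def predict_vesicle_mask(clf, image):
        return clf.predict(pixel_features(image)).reshape(image.shape).astype(bool)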

The Fit algorithm depends on the correct estimation of the free parameters. These are the physical properties of the imaging system and sample: the apparent size of point objects, i. e. the FWHM of the PSF, the mean brightness of single objects and the brightness variation between them, the minimal brightness of objects, and the background level. Especially inhomogeneous background and large brightness variations are disadvantageous for the algorithm. On some experimental data sets, the Fit algorithm performed worse than the other two algorithms with the generally chosen parameters (cross validation, Fig. 2.12). The reason is presumably the larger number of free parameters that had to be adjusted and that were not well transferable between different experimental conditions. In addition, the algorithm is computationally very slow compared to the other two algorithms.
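
The sketch below illustrates the core step of such a fit-based localizer under simplified assumptions: a single 2D Gaussian, with its width tied to the PSF FWHM, plus a constant background is fitted to a small region around a candidate spot. It only shows how the free parameters (spot size, brightness, background) enter the model; it is not the probabilistic Fit algorithm evaluated above.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_spot(coords, x0, y0, amplitude, background, fwhm):
        x, y = coords
        sigma = fwhm / 2.355                  # convert FWHM to standard deviation
        return (amplitude * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
                + background)

    def fit_spot(region, fwhm_guess):
        """region: small 2D array cut out around a candidate spot."""
        y, x = np.mgrid[0:region.shape[0], 0:region.shape[1]]
        p0 = (region.shape[1] / 2, region.shape[0] / 2,   # center guess
              region.max() - region.min(), region.min(),  # amplitude, background
              fwhm_guess)                                  # apparent spot size
        popt, _ = curve_fit(gaussian_spot, (x.ravel(), y.ravel()),
                            region.ravel(), p0=p0)
        return popt                            # x0, y0, amplitude, background, fwhm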

Under the conditions used here, the higher complexity of the Fit algorithm did not pay off through enhanced performance. Whenever the parameters are well known, though, the algorithm can use its full potential. The algorithm is advantageous if the objects regularly appear in pairs lying close together [184].

The Fit algorithm can simultaneously estimate several different solutions of the localization problem; probabilities are determined for different particle configurations (e. g. a bright spot might consist of one bright particle or of two dim ones). These alternative localizations might be useful when localization and tracking are combined so that information about the previous particle positions is used for their localization. The localization algorithms find the vesicles, but due to the lack of specific optical properties of the vesicles, they cannot identify the same vesicle in successive frames. This identification via tracking has to rely on spatial proximity. The tracking algorithm of Crocker and Grier [56], which was extended here, considers simultaneously all particles in two consecutive frames, but it operates only on a frame-to-frame basis. Tracking might be enhanced by the implementation of algorithms that consider the whole movie as a three-dimensional spatio-temporal volume [33] instead of tracking on a frame-to-frame basis. Also, multiple-target tracing [281] might improve the tracking if the images are sufficiently bright. Multi-hypothesis tracking [255] would be globally optimal in space and time, but is computationally prohibitive even for small data sets [158].
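
A simplified sketch of frame-to-frame linking in this spirit is given below: all particles of two consecutive frames are matched simultaneously by minimizing the total squared displacement, and links longer than a maximum displacement are discarded. It illustrates the principle only and is not the extended Crocker and Grier implementation used here (no memory for temporarily lost particles, no special handling of appearing or disappearing objects).

    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def link_frames(positions_a, positions_b, max_displacement):
        """positions_a: (N, 2) array, positions_b: (M, 2) array of coordinates.
        Returns a list of (index_in_a, index_in_b) pairs of linked particles."""
        cost = cdist(positions_a, positions_b, metric="sqeuclidean")
        rows, cols = linear_sum_assignment(cost)    # globally optimal assignment
        return [(i, j) for i, j in zip(rows, cols)
                if cost[i, j] <= max_displacement**2]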

Of the three localization algorithms compared, the Local Maxima algorithm finds the vesicles at least as well as the other algorithms, requires the least parameter tuning, and is computationally fast. It was therefore used in combination with the tracking algorithm of Crocker and Grier for most of the data analysis in Sec. 4.1.

Applications in Colloidal Physics

This chapter shows how the unique capabilities of Fast STED microscopy were applied to visualize and analyze colloidal systems. The first section presents the imaging of diffusing nano-particles in dense samples at a rate of 80 fps. The second section presents movies of the formation of colloidal crystals imaged at 200 fps.

3.1 Diffusion of Nano-Particles
