
Figure 2.27: Validity of the MCT power laws for the binary systems (cf. Figure 2.19 for the monodisperse case): Ξ± and Ξ² relaxation times are shown together with the plateau differences to the critical plateau, varying the relative separation Ξ΅ = (c_{11,c} βˆ’ c_{11})/c_{11,c} from the transition point. The magenta lines are power laws with the exponents Ξ΄ = 1.62 (for Ο„_Ξ²^Β±) and Ξ³ = 2.49 (for Ο„_Ξ±) derived from the transition in the monodisperse system with only big particles (computed via MPB-RMSA), which has an exponent parameter of Ξ» = 0.74.

The results for the binary system presented here indicate that there is no qualitative change in the picture of the MCT glass transition compared to monodisperse systems. Important quantities like structure factors and pair distribution functions at the transition points, critical NEPs, localization lengths and relaxation times are very similar to those in monodisperse systems. However, one should keep in mind that the system discussed here might be somewhat special. The volume fraction of only 5 % is rather low, and the screening lengths 1/kΜƒ are in the range of about 1/4 of the mean interparticle distance, which corresponds to very low screening. On the other hand, the chosen size and charge ratio of 0.63 is not extremely small. One might expect qualitative changes for substantially smaller ratios. The simulations confirmed that crystallization in this binary system is indeed successfully prohibited, which is not the case in monodisperse systems. Basically the same system with the same size and charge ratio was investigated in the diffusing wave spectroscopy experiments discussed in Chapter 8. So the combination of MCT and MC simulations presented above provides useful results for later comparisons to experiments.

2.5 Conclusion

MCT predictions for the glass transition points were computed by varying the system parameters of the hard sphere Yukawa (HSY) model, namely the volume fraction Ξ¦, the coupling parameter Ξ³ and the screening parameter k. It turns out that the glass transition lines Ξ³(k) for constant Ξ¦ are similar to the crystal melting line determined by others in simulations of screened charged particle systems. The main difference is that the required repulsion described by Ξ³ is a factor of about 1.5 larger (for Ξ¦ < 0.4). An effect of the hard sphere part of the HSY potential is only seen for volume fractions Ξ¦ > 0.45. The required Ξ³ then decreases dramatically until no additional repulsion is necessary at the hard sphere MCT transition at Ξ¦ ≃ 0.516. A Hansen-Verlet-like criterion, stating that the principal peak height of the structure factor S(q_max) has to exceed a certain value for the system to become solid, yields considerably larger critical peak heights for charged systems. Going from infinite screening (hard spheres) to very low screening, the critical S(q_max) grows from 3.5 up to 4.7. Along the same line, the critical principal peak heights of the pair distribution function g(r_max) go down from 5.5 to 2.7. Thus, MCT in combination with MPB-RMSA structure factors predicts long range order to be more important in charged systems, while short range order becomes less important. This can be interpreted as a sign of the larger range of the HSY potential, which increases for lower screening.

A similar trend towards a lower importance of short range order is seen in the localization lengths r_s, interpretable as the size of the cage in which particles are trapped in a glassy system. For a charged system at low screening (kΜƒ ≃ 4), directly at the transition r_s is about 7.9 % of the mean interparticle distance, while it is about 7.5 % for pure hard spheres. MCT power laws for the Ξ±- and Ξ²-relaxation times and for the plateau heights of glassy correlation functions are predicted to be valid up to a relative separation from the transition point of Ξ΅ = 0.1 or Ξ΅ = 0.01, respectively. The exponents of the relaxation time power laws are very similar to those for hard spheres; only for low screening are they distinctly smaller.

Using the experimental system parameters effective charge Z_eff and salt concentration c_11 instead of Ξ³ and k, it turns out that the MCT transition line is not always reachable, not even with an arbitrarily high charge number Z_eff. This is due to the effect of counterions: they increase the screening and make the system softer. It was shown that a high value of the product of diameter and dielectric constant Οƒ Β· Ξ΅ enhances the possibility to reach the transition line. Counterions are also the reason for a reentrant transition scenario. With increasing effective charge Z_eff the system at first undergoes a transition from liquid to glass, but with a further increase of Z_eff the screening due to counterions can become so strong that the system returns from glass to liquid.

MCT calculations using structure factors from MC simulations of binary mixtures have shown no qualitative difference to monodisperse systems. The localization lengths and power law predictions are very similar for the system discussed here. The main use of these results is the ability to compare them to the experimental system of PS particles in water elaborated in Chapter 8. Since monodisperse charged systems tend to crystallize rather than form colloidal glasses, these predictions for binary systems are very useful.

Chapter 3

Particle detection

This chapter is dedicated to the detection of spherical particles in digital images recorded by video microscopy. Already in 1996, Crocker and Grier [35] introduced an algorithm that has been commonly used in many experiments. However, in the case of binary or polydisperse systems, where particles of different sizes have to be distinguished, the code does not perform well enough. The reason is that the algorithm is optimized for the detection of particles of a chosen fixed size. If the particle sizes in the image are too diverse, false detections and also many unidentified particles are the consequence.

The method described here uses a technique that does not depend on the size of the detected objects: the Scale Invariant Feature Transform (SIFT). Developed by computer scientists (Lowe [64]), it is used to find and characterize features of arbitrary size in digital images. Well known applications are object or face recognition, but also the overlapping of a series of images (stitching), for instance to create panoramic views. Leocmach and Tanaka [65] were the first to translate the algorithm to the detection of spherical particles. Not only is the detection independent of the particle size, but the determination of the radius is also accurate up to an error of only a few percent.

First, the theoretical background for both the method according to Crocker and Grier [35] and SIFT is presented, together with a detailed description of the implementation done in the course of this work.

Tests on simulated as well as on real images are presented, discussing the accuracy of the determined particle positions and sizes, together with a comparison of the two detection methods. Finally, the problem of finite exposure times in the image recording is discussed, along with how to mitigate it in 3D measurements.

Contents

3.1 Detection of particles according to Crocker and Grier . . . 58
3.1.1 Background and noise reduction . . . 58
3.1.2 Sub-pixel precision . . . 60
3.1.3 Summary and critique . . . 61
3.2 Detection of particles with SIFT . . . 62
3.2.1 Scale space and difference of Gaussians . . . 62
3.2.2 Detection and localization of local minima . . . 65
3.2.3 Rejection of bad candidates . . . 66
3.2.4 Sizing . . . 68
3.2.5 Different resolution in z direction, inflation . . . 68
3.2.6 Implementation . . . 69
3.3 Tests of the SIFT algorithm . . . 70
3.3.1 Tests on simulated images . . . 70
3.3.2 Tests on a rigid sample . . . 74
3.3.3 Comparison to Crocker and Grier's method . . . 77
3.4 Finite exposure time problem . . . 80
3.4.1 Positioning error for moving particles: Brownian motion . . . 80
3.4.2 Recomputing x and y coordinates from the 2D slices . . . 82
3.4.3 Application to a binary system with glassy dynamics . . . 82
3.4.4 Is the 2D exposure time short enough for MSD measurements? . . . 85
3.4.5 How to do a correction for the z direction? . . . 87
3.5 Conclusion . . . 88

3.1 Detection of particles according to Crocker and Grier

3.1.1 Background and noise reduction

In confocal microscopy one has the possibility to capture slices through a sample at different depths z. One can either take 2D images at a fixed depth or capture whole stacks of slices at equidistant z positions to obtain a 3D image, which is a snapshot of the observed sample volume. In 2D the measured quantity is the pixel value I(x, y), in 3D it is commonly the voxel value I(x, y, z).

In a perfect experiment the value of a voxel would be 1 inside a particle and 0 outside. In reality there are many sources of noise and distortion. One problem is that light emitted by the particles is scattered in the sample, leading to a distorted background. Then there is digitization and readout noise from the camera. Another issue when taking pictures is shot noise, especially if the intensity is very low. Furthermore, contrast and intensity gradients can arise from nonuniform illumination, a problem that occurs mostly with confocal microscopes using a Nipkow disk. A real image is noisy and the background is nowhere really zero; therefore it is crucial to apply filters to get closer to a perfect image.

Crocker and Grier [35] assume spherical particles that are well separated in the sense that their features in the image are relatively small. This means that the background can be obtained by simply computing a floating average, sometimes called a boxcar average. In 1D this corresponds to a convolution with a boxcar function which is 1 inside an interval (e.g. [βˆ’w, w]) and 0 outside. In 2D one has to convolve with a square and in 3D with a cube. In the 3D case the average is written as

𝐼𝑀(π‘₯, 𝑦, 𝑧) = 1 (2𝑀+ 1)3

βˆ‘π‘€ 𝑖,𝑗,π‘˜=βˆ’π‘€

𝐼(π‘₯+𝑖, 𝑦+𝑗, 𝑧+π‘˜) , (3.1)

which is the convolution of the image with a cube of edge length 2w + 1. Since x, y, z can only take integer values (discrete positions of the voxels), the convolution is not an integral but a sum. For a good estimate of the background value, the boxcar dimension w needs to be bigger than the radius of a particle but smaller than the average distance between particles. The application of this average to a 1D picture is shown in Figure 3.1. The blue line in the right panel, which represents the calculated background I_w(x), reveals that the illumination of the simulated noisy image is higher on the left and lower on the right.
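As an illustration, the background estimate of Equation 3.1 is nothing more than a moving average and can be computed with standard tools. The following Python sketch uses scipy.ndimage; the function name boxcar_background and the edge-handling mode are choices made for this illustration only, not part of the original implementation [35].

import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_background(I, w):
    """Background estimate I_w of Eq. 3.1: a moving (boxcar) average
    over a cube of edge length 2w + 1 centred on each voxel."""
    # w should exceed the particle radius but stay below the mean
    # interparticle distance, as discussed above.
    return uniform_filter(I.astype(float), size=2 * w + 1, mode="nearest")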

Digitization, Poisson and readout noise are completely random and typically have a spatial correlation length of just one voxel. They can be greatly reduced by using a Gaussian filter with a width of Οƒ β‰ˆ √2 voxels, which is again a convolution:

I_\sigma(x, y, z) = \frac{1}{B} \sum_{i,j,k=-w}^{w} I(x+i,\, y+j,\, z+k) \, \exp\!\left( -\frac{i^2 + j^2 + k^2}{2\sigma^2} \right) , \qquad B = \left[ \sum_{i=-w}^{w} \exp\!\left( -\frac{i^2}{2\sigma^2} \right) \right]^3   (3.2)

Subtracting the calculated background from the Gaussian filtered image yields a new image A = I_Οƒ βˆ’ I_w, which is a lot closer to the perfect image (see the red line in Figure 3.1).


[Plot panels of Figure 3.1: intensity I(x) versus position x for two particles with d = 1.0 and d = 0.7; left panel: realistic vs. perfect image; right panel: background, Gaussian-filtered image, subtraction and perfect image.]

Figure 3.1: Filtering in the method of Crocker and Grier [35]: The left panel compares a perfect 1D image of two particles with a realistic image including noise and uneven illumination. The right panel shows the calculated background and the Gaussian filtered version of the noisy image. The red line shows the image obtained by subtracting the background from the Gaussian filtered image, of which the positive part is used for further processing.

Figure 3.2: Application of the background (b) and Gaussian (c) filters to a simulated raw 2D image with noise and uneven illumination (a). The difference image (d) of c and b is used to detect the positive local maxima. Insets show the shape of the filter kernels as described in Crocker and Grier [35].

In particular, the intensity level of all particles is now the same. One can compute the corrected image directly by performing just a single convolution,

A(x, y, z) = I(x, y, z) \star \left[ G(x, y, z, \sigma) - \mathrm{cube}(x, y, z, 2w+1) \right] ,   (3.3)

where G denotes the Gaussian as in Equation 3.2 and cube(x, y, z, s) is a function which is 1 inside a cube (centred at the origin) of edge length s and zero outside. Projected into two dimensions, the function G βˆ’ cube is reminiscent of a Mexican sombrero with a high tip and a quadratic rim that bends up sharply at the edge.

In Figure 3.2 we can see how the filtering acts on a simulated noisy 2D image (panel a). The application of the background filter (panel b) shows that the illumination decreases towards the right side of the image. Applying the Gaussian filter (panel c) removes the noise at small length scales. Finally, in the difference of panels c and b (panel d), we can see that the brightness has become lower but is at a similar level for all 4 particles.

3.1.2 Sub-pixel precision

With the filtered image A(x, y, z), the next step is to set all voxels with negative values to zero, so that the background between particles is always zero or at least very close to zero. Particles can now be detected by searching for local maxima: in practice, a voxel is taken as a candidate particle position if no other voxel within a distance w has a higher value. Since the background is sometimes slightly above zero, one has to discard all candidates below a deliberately chosen threshold.
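A straightforward way to implement this candidate search is to compare every voxel with the maximum of its neighbourhood, as in the sketch below. A cubic neighbourhood of half-width w is used here as a convenient stand-in for the spherical "within a distance w" criterion; the function name and the threshold handling are illustrative assumptions.

import numpy as np
from scipy.ndimage import maximum_filter

def find_candidates(A, w, threshold):
    """Integer-voxel candidate positions: voxels that are the brightest
    within a (2w+1)-sized neighbourhood and exceed the threshold."""
    A = np.clip(A, 0.0, None)                        # negative values set to zero
    is_peak = A == maximum_filter(A, size=2 * w + 1, mode="nearest")
    return np.argwhere(is_peak & (A > threshold))    # array of (x, y, z) triples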

The final step in the positioning according to Crocker and Grier [35] is a refinement procedure. Having found the brightest voxel for each particle, the position is so far only known to within an accuracy of one voxel.

Due to noise, in many cases this is not even the voxel closest to the particle's centre. Therefore one computes something similar to a centre of mass, which in this case means the centre of intensity of the voxels inside a sphere of a chosen radius (usually one takes w) situated at the candidate position x, y, z:

βŽ›βŽœ (πœ–π‘₯, πœ–π‘¦, πœ–π‘§) gives the fractional shift of the centre of intensity relative to the candidate position. While π‘₯, 𝑦, 𝑧are integer values,π‘₯π‘Ÿ, π‘¦π‘Ÿ, π‘§π‘Ÿnow become real floating point numbers and the error in the standard deviation of the position measurement is reduced to below 10 %of the voxel size, depending on how many voxels contribute to the calculation. Gao and Kilfoil [66] proposed to do a further refinement by iterating this step using linearly interpolated voxel values. The intensity𝐴is given only at integer values π‘₯, 𝑦, 𝑧, however, if the fractional shiftπœ–π‘₯is in the interval[0,1]one can easily do an interpolation:

𝐴(π‘₯+πœ–π‘₯, 𝑦, 𝑧) =πœ–π‘₯𝐴(π‘₯, 𝑦, 𝑧) + (1 βˆ’πœ–π‘₯)𝐴(π‘₯+ 1, 𝑦, 𝑧) (3.5) Applying this as well to the 𝑦 and𝑧 direction one can calculate 𝐴(π‘₯+πœ–π‘₯+𝑖, 𝑦+πœ–π‘¦+𝑗, 𝑧+πœ–π‘§+π‘˜) and put it into Equation3.4. This yields a new fractional shift(πœ–β€²π‘₯, πœ–β€²π‘¦, πœ–π‘§β€²)and leads to a further refined position of the centre of intensity. Gao and Kilfoil [66] showed that pixel biasing, which is the effect that detected particle positions tend to agglomerate near integer pixel/voxel positions, can be greatly reduced or even completely disappears by doing up to20iterations of this refinement process.

In Figure 3.3 the position refinement is illustrated using simulated noisy images of particles with different diameters. Crocker and Grier [35] proposed to perform the refinement step directly on the raw images.


Figure 3.3: Sub-pixel precision is achieved by computing the centre of intensity of the voxels inside a sphere. One starts with a sphere around the brightest voxel (red circle and red cross). After one refinement step the green circle is reached, and after 5 iterations the refinement converges to the yellow circle and the black + sign.

In this work, however, it has proven safer to use the filtered images A(x, y, z). There, the background between particles is constantly zero, which means it does not contribute to the centre-of-intensity calculation at all.

The method benefits from the fact that many voxels are included in the calculation of one position. However, in order to avoid systematic errors it is crucial that the radius w is not too big: if neighbouring particles contribute to the calculation, they pull the position in their direction, so that distances between neighbouring particles appear smaller than they really are. On the other hand, if w is too small there are not enough dark and bright voxels for a precise measurement. For good results, a minimum of about 20 voxels should lie inside a sphere (3D) or a circle (2D) of radius w.
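The rule of thumb of roughly 20 voxels inside the refinement region can be checked by simply counting the lattice points within radius w; the small helper below is a hypothetical convenience function for this check, not part of the detection method itself.

import numpy as np

def voxels_in_region(w, ndim=3):
    """Number of integer lattice points inside a sphere (ndim=3) or a
    circle (ndim=2) of radius w."""
    rng = np.arange(-int(np.ceil(w)), int(np.ceil(w)) + 1)
    grids = np.meshgrid(*([rng] * ndim), indexing="ij")
    return int(np.sum(sum(g**2 for g in grids) <= w**2))

# voxels_in_region(2) == 33 and voxels_in_region(1.5) == 19, so a radius
# of about 2 voxels already satisfies the criterion in 3D.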

3.1.3 Summary and critique

Crocker and Grier's particle detection algorithm [35] was established in 1996 for 2D images. Since then it has been adapted to 3D and has been used for particle detection in many colloidal experiments [67, 68, 69, 70]. Gao and Kilfoil [66] criticized that many implementations only reach sub-pixel accuracy in the x, y plane but not in z, which can only be achieved by doing the refinement step in 3D and by repeating it at least 10 times. This removes pixel biasing and leads to a dramatic correction of the physical quantities:

In their experiment the plateau in ⟨xΒ²(Ο„)⟩ (one-direction MSD) was about a factor of 6 lower with refined positions, and the first peak of the pair distribution function g(r) increased by almost 20 %.

Lu et al. [71] showed the importance of a rotationally symmetric kernel in the noise reduction step.

Using asymmetric kernels gives the artificial appearance of orientational crystalline order, even in fully disordered isotropic particle systems. The kernel for the background calculation shown in Figure 3.2 is also asymmetric, and a remnant of this fourfold asymmetry is clearly visible in the filtered image b.

Important parameters in the detection algorithm are the width Οƒ of the Gaussian filter and the radius w of the sphere used in the refinement step. If Οƒ is chosen too small, the intensity profile near a particle's centre remains quite flat and the brightest pixel is not at the particle's centre (see Figure 3.1). If Οƒ is too big, the peaks of neighbouring particles overlap, which shifts their detected positions towards each other. Similarly, if w is too small, the centre of intensity is not computed from all the voxels inside a particle, which makes the result more sensitive to noise. On the other hand, if w is too big, neighbouring particles again degrade the refined positions. Problems therefore arise especially in dense systems and in systems whose polydispersity exceeds a certain threshold; a value of 6-7 % is already too much for accurate positioning [65].

Another issue is the determination of particle sizes. In a dilute system this could be done by simply summing up the voxel values inside a sphere of radius w around the determined particle position, the same sphere that is used in the refinement step. This sum is directly related to the particle's volume and thus to its radius. However, in a dense system this cannot work: there is no possible choice of w such that the sphere includes all the voxels of any occurring particle size and, at the same time, no voxels from neighbouring particles.
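For a dilute system, the size estimate described above might look like the following sketch: the voxel values inside the refinement sphere are summed, and the radius is taken proportional to the cube root of that sum. The function name, the calibration constant and the assumption that particles are not too close to the image border are all choices of this illustration.

import numpy as np

def estimate_radius(A, p, w, calibration=1.0):
    """Dilute-system size estimate: the integrated intensity inside a
    sphere of radius w scales with the particle volume, so the radius
    scales with its cube root. `calibration` would be fixed using
    particles of known size (assumed here); particles near the image
    border are not handled."""
    x, y, z = np.round(p).astype(int)
    rng = np.arange(-w, w + 1)
    i, j, k = np.meshgrid(rng, rng, rng, indexing="ij")
    inside = i**2 + j**2 + k**2 <= w**2
    region = A[x - w:x + w + 1, y - w:y + w + 1, z - w:z + w + 1]
    integrated = np.clip(region[inside], 0.0, None).sum()
    return calibration * integrated ** (1.0 / 3.0)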

In principle, for an ideal position and size determination, each particle needs its own values of w and Οƒ. As we will see, this is exactly what is done when SIFT is used for the particle detection.