
Figure 6.11: MAP reconstructions for Cartesian undersampling, axial TSE data. Panels: MAP ct (64 lines, E=6.62); MAP op (64 lines, E=4.38); MAP full; MAP rd (64 lines, E=5.60); MAP eq (64 lines, E=7.77). TE = 11 ms (unknown during design optimisation), Nshot = 64 phase encodes (red: 32 initial centre lines; blue: 32 additional encodes according to design choices). Upper row: full images; the white window marks the location of the blow-up. Middle row: residuals (difference to u_true) and location of phase encodes (k-space columns). Lower row: blow-ups. MAP ct shows apparently lower resolution than MAP rd and MAP op. Both MAP ct and MAP rd tend to fill in the dark area, while MAP op retains high contrast there.

role for nonlinear reconstruction, and that sampling optimisation has to be matched to the reconstruction modality. The improvement of optimised over other design choices is most pronounced for smaller numbers of acquired lines. Importantly, even though designs are optimised on a single slice of data, a large part of these improvements generalises to different datasets in our study, featuring other slice positions, subjects, echo times, and even orientations (figure 6.9). Our results indicate that Bayesian design optimisation can be used offline: trajectories are adjusted on data acquired under controlled circumstances, and the final optimised designs can be used for future scans. Our framework embodies the idea of adaptive optimisation. The sampling design is adjusted based on a representative dataset (the training set), and if adequate measures for complexity control are in place (Bayesian sparsity prior, proper representation of posterior mass, a sequential scheme that uncovers information only if asked for), good performance on the training set (figure 6.8) tends to imply good performance on independent test sets (figure 6.9), and thus successful generalisation to similar future tasks.

Our framework is not limited to Cartesian sampling, as demonstrated by our application to spiral k-space optimisation. However, our findings are preliminary in this case: spiral sampling was interpolated from data acquired on a Cartesian grid, and only the offset angles of dense Archimedean interleaves were optimised (instead of considering variable-density spiral interleaves as well). In this setting, designs optimised by our technique show comparable performance to spacing offset angles equally, while a randomisation of these angles performs much worse.

In Bayesian design optimisation, statistical information is extracted from one or a few representative images used during training and represented in the posterior distribution, which serves as an oracle to steer further acquisitions along informative directions. Importantly, and confirmed in further experiments (not shown), it is essential to optimise the design on MRI data from real-world subjects, or controlled objects of similar statistical complexity; simple phantoms do not suffice. While the latter are useful for analysing linear reconstruction, they cannot play the same role for nonlinear sparse reconstruction. Modern theory proves that overly simple signals (such as piecewise constant phantoms) are reconstructed perfectly from undersampled measurements, almost independently of the design used for acquisition [Candès et al., 2006, Donoho, 2006a]. This advantage of sparse reconstruction per se, for almost any design, does not carry over to real-world images such as photographs (see chapter 5) or clinical-resolution MR images. The relevance of design optimisation grows with signal complexity, and is dominant for MR images of diagnostically useful content and resolution.

[Figure 6.12 plots omitted: L2 reconstruction error against Nshot, the number of spiral arms (2-9); panels: axial short TE, axial long TE, sagittal short TE, sagittal long TE; curves: MAP-rd, MAP-eq, MAP-op.]

Figure 6.12: Results for MAP reconstruction, spiral undersampling of offset angles θ0. Left column: reconstruction errors on the sagittal slice (see figure 6.8 left), on which op is optimised. Right column: reconstruction errors on different data (averaged over 5 slices, 4 subjects each; see figure 6.9). Upper row: offset angles from [0, π). Lower row: offset angles from [0, 2π). Design choices: equispaced [eq]; uniform at random [rd] (averaged over 10 repetitions); optimised by our Bayesian technique [op].

Variable density phase encoding sampling does not perform well at 1/4 of the Nyquist rate (figure 6.10, figure 6.11), if the density of Lustig et al. [2007] is used. For a different density with lighter tails (more concentrated on low frequencies), reconstructions are better at that rate, but are significantly worse at rates approaching 1/2 or more (results not shown). In practice, this drawback can be alleviated by modifying the density as the number of encodes grows. From our experience, a second major problem with variable density design sampling comes from the independent nature of the process: the inherent variability of independent sampling leads to uncontrolled gaps in k-space, which tend to hurt image reconstruction substantially. Neither of these problematic aspects is highlighted in Lustig et al. [2007], or in much of the recent compressed sensing theory, which focuses solely on the incoherence of a design. A clear outcome from our experiments here is that while incoherence plays a role for nonlinear reconstruction, its benefits are easily outweighed by neglecting other design properties. Once design sampling distributions have to be modified with the number of encodes, and dependencies on previously drawn encodes have to be observed, the problem of choosing such a scheme is equivalent to the design optimisation problem. For the latter, we propose a data-driven alternative to trial-and-error, showing how to partly automate a laborious process which in general has to be repeated from scratch for every new configuration of scanner setup and available signal prior knowledge.
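The gap problem with independent sampling can be illustrated with a toy sketch (the grid size and line count below are illustrative choices, not taken from the experiments): a deterministic equispaced design bounds the largest unsampled run of k-space columns, whereas independently drawn columns leave gaps of uncontrolled size.

```python
import random

def max_gap(cols, grid=256):
    """Largest run of consecutive unsampled k-space columns."""
    s = sorted(set(cols))
    return max(b - a - 1 for a, b in zip(s, s[1:]))

grid, n = 256, 64
equispaced = list(range(0, grid, grid // n))   # deterministic design
print(max_gap(equispaced, grid))               # 3: gaps tightly controlled

rng = random.Random(0)
gaps = [max_gap(rng.sample(range(grid), n), grid) for _ in range(100)]
print(min(gaps), max(gaps))                    # typically far larger, and variable
```

A variable-density sampler biased towards low frequencies would show the same qualitative effect in its tails, which is exactly the variability the text attributes to the independence of the draws.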

Further issues will have to be addressed in a fully practical application of our method. We extracted (or interpolated) undersampling trajectories from data acquired on a complete Cartesian grid, which may be realistic for Cartesian undersampling, but neglects practical inaccuracies specific to non-Cartesian trajectories. Moreover, in multi-echo sequences, the ordering of phase encodes matters. For an immobile training subject/object, our sequential method can be implemented by nested acquisitions: running a new (partial) scan whenever X is extended by a new interleave, dropping the data acquired previously. With further attention to implementation and commodity hardware parallelisation, the time between these scans will be on the order of a minute. Gradient and transmit or receive coil imperfections (or sensitivities), as well as distortions from eddy currents, may imply constraints on the design, so that fewer candidates may be available in each round. Such adjustments to reality will be simplified by the inherent configurability of our Bayesian method, where likelihood and prior encode the forward model and known signal properties.

The near-Hermitian symmetry of measurements is an important instance of prior knowledge, incorporated into our technique by placing sparsity potentials on the imaginary part ℑ(u). This leads to marked improvements for sparse reconstruction, and is essential for Bayesian k-space optimisation to work well. In addition, phase mapping and subtraction is required: phase contributions substantially weaken image sparsity statistics, thereby eroding the basis sparse reconstruction stands upon. In the presence of unusual phase errors, specialised phase mapping techniques should be used instead. In future work, we aim to integrate phase mapping into our framework.
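The symmetry in question is the standard Fourier fact that a real-valued signal has a conjugate-symmetric (Hermitian) spectrum; after phase subtraction an MR image is nearly real, so its k-space is nearly Hermitian, and a sparsity potential penalising ℑ(u) encodes exactly this. A minimal numerical check, using a naive DFT on a toy real-valued "image row" (the numbers are arbitrary):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for k-space acquisition)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

row = [0.0, 1.0, 2.0, 1.0, 0.5, 0.0, -0.5, 0.25]   # real-valued toy image row
X = dft(row)
n = len(X)
# Hermitian symmetry: X[n - k] == conj(X[k]) for every k.
asym = max(abs(X[(n - k) % n] - X[k].conjugate()) for k in range(n))
# For an exactly real signal, asym is zero up to floating-point precision.
```

A nonzero imaginary component in the signal would show up directly as a nonzero `asym`, which is the deviation the sparsity potential on ℑ(u) keeps small.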

In light of the absence of a conclusive nonlinear k-space sampling theory and the well-known complexity of nonlinear optimal design, our approach has to be seen in the context of other realisable strategies. Designs can be optimised by blind (or heuristic) trial-and-error exploration [Marseille et al., 1996], which in general is much more demanding in terms of human expert and MRI scan time than our approach. Well-founded approaches fall into two classes: either artificially simplified problems are solved optimally, or adaptive optimisation on representative real datasets is used. We have commented above on recent advances in the first class, for extremely sparse, unstructured signals [Candès et al., 2006, Donoho, 2006a], but these results empirically seem to carry little relevance for real-world signals. Our method falls into the second class, as an instance of nonlinear sequential experimental design [Chaloner and Verdinelli, 1995, Fedorov, 1972], where real-world signals are addressed directly, and for which few practically relevant performance guarantees are available. Our approach to design optimisation is sequential, adapting measurements to the largest remaining uncertainties in the reconstruction of a single image. While we established sound generalisation behaviour on unseen data in our experiments, real-time MRI [Gamper et al., 2008], [Bernstein et al., 2004, chapter 11.4] may especially benefit from our sequential, signal-focused approach. While our algorithm at present does not attain the high frame rates required in these applications, algorithmic simplifications, combined with massively parallel digital computation, could allow our framework to provide advanced data analysis and decision support to an operator during a running MRI diagnosis.
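The sequential principle (measure where posterior uncertainty is largest, then update) can be caricatured on a diagonal Gaussian model. The variance update below is the exact posterior-variance formula for a direct noisy observation of a single coordinate; everything else, including the diagonal posterior, is a deliberate simplification and not the actual algorithm of this chapter:

```python
def sequential_design(prior_vars, n_rounds, noise_var=1.0):
    """Greedy sequential design on a toy diagonal Gaussian posterior.

    Each round "measures" the coordinate with the largest remaining
    marginal variance; observing coordinate i with noise variance s
    updates its posterior variance to v_i * s / (v_i + s).
    """
    v = list(prior_vars)
    picks = []
    for _ in range(n_rounds):
        i = max(range(len(v)), key=lambda j: v[j])   # largest uncertainty
        picks.append(i)
        v[i] = v[i] * noise_var / (v[i] + noise_var)  # posterior update
    return picks, v

picks, post = sequential_design([4.0, 1.0, 2.0], n_rounds=2)
# picks == [0, 2]: the largest uncertainties are attacked first.
```

In the actual framework the "coordinates" are candidate k-space interleaves and the scores come from the approximate posterior over images, but the greedy uncertainty-chasing structure is the same.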

Possible extensions include the application of the framework to 3D imaging. One step in this direction has already been taken by Seeger [2010b], where Markovian assumptions between neighbouring slices are used to approximate full inference on a 3D cube of voxels instead of a 2D slice. Other future steps include the application of our methodology to real non-Cartesian measurements instead of simulated ones.

Chapter 7

Overall Conclusion and Perspectives

7.1 Summary

In this thesis, we developed and discussed many aspects of deterministic approximate inference algorithms for generalised linear Bayesian models: chapter 3 focused on convexity and scalability, while chapter 4 compared relative accuracy. We applied the algorithms to binary classification (chapter 3), linear compressive image acquisition (chapter 5) and magnetic resonance imaging (MRI) optimisation (chapter 6), demonstrating the validity and utility of our approach.

We studied three kinds of problems in increasing order of difficulty:

1. estimation, where the probabilistic model needs to provide a single best answer, that is, a decision to be used in the future,

2. inference, where a normalised relative weighting between all possible answers is provided in the form of the posterior distribution, leaving the decision open, and

3. experimental design, where we seek to determine the questions to be asked in the first place, so as to obtain solid knowledge that allows informed answers to be produced subsequently.

In order to overcome analytical intractabilities, we had to make several approximations: we replaced non-Gaussian distributions by Gaussian ones, and we worked with lower bounds on marginal variances instead of their exact values. We saw strong similarities between the approximate inference algorithms, which helped us understand the effect of the approximations in practice. We also made clear that inference is to a certain extent orthogonal to modelling, because many inference algorithms are able to approximate the exact posterior using the same interface. Finally, we detailed the nested structure of the interrelations between estimation, inference and design: design can be done using a sequence of inference steps, and inference can be understood as a sequence of estimation steps. Most estimators are solutions of optimisation problems; by contrast, inference corresponds to considerably harder integration problems.
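As a small illustration of the first approximation (replacing a non-Gaussian distribution by a Gaussian), consider the classical Laplace approximation, which matches the mode and local curvature of the negative log-density. This is a generic textbook device chosen for brevity, not the specific variational algorithms of chapters 3 and 4; the Gamma target below is likewise just a convenient example:

```python
# Laplace approximation: approximate p(x) ∝ exp(-f(x)) by a Gaussian
# N(mode, 1 / f''(mode)).  Toy non-Gaussian target: Gamma(a, 1), for which
# f(x) = x - (a - 1) * log(x) up to an additive constant.
a = 5.0
mode = a - 1.0                  # root of f'(x) = 1 - (a - 1) / x
curv = (a - 1.0) / mode**2      # f''(x) = (a - 1) / x^2, evaluated at the mode
approx_var = 1.0 / curv         # Gaussian variance = inverse curvature

true_mean, true_var = a, a      # exact Gamma(a, 1) moments for comparison
# Approximation: N(4, 4) versus the exact mean 5 and variance 5 -- close,
# but systematically biased, which is the price paid for tractability.
```

The same trade-off (a tractable Gaussian surrogate whose moments are close to, but not equal to, the true ones) is what the lower bounds on marginal variances quantify in the algorithms studied here.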