
makes the algorithms diverge. Here, we replace F by the Sobolev gradient F_s, defined as the solution of the equation

div(σ∇F_s) = F in Ω,  F_s = 0 on ∂Ω.

For more detail about the Sobolev gradient, we refer to [56] and the references therein. Note that, in running the algorithms, we have observed that the gradient-cutting technique used for the diffusion identification problem does not work for this problem.
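The smoothing step above can be sketched numerically. The following is a minimal 1D finite-difference illustration of computing a Sobolev gradient F_s from an L²-gradient F (the thesis works in 2D, where a finite element solver would be used; the function name and discretization here are our own):

```python
import numpy as np

def sobolev_gradient_1d(F, sigma, h):
    """Solve div(sigma * grad(Fs)) = F on a uniform 1D grid with Fs = 0 at both ends.

    F     -- L2-gradient values at the n interior nodes
    sigma -- conductivity values at the n + 1 cell interfaces
    h     -- grid spacing
    """
    n = len(F)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(sigma[i] + sigma[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = sigma[i] / h**2
        if i < n - 1:
            A[i, i + 1] = sigma[i + 1] / h**2
    # Homogeneous Dirichlet boundary conditions are built into the first/last rows.
    return np.linalg.solve(A, F)
```

Replacing F by F_s thus costs one elliptic solve per iteration; F_s inherits the extra regularity of the elliptic problem, which is what smooths the iterates.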

We emphasize that the convergence conditions of the algorithms are difficult to verify for this problem. However, the numerical example will show that most of the algorithms still work well. The replacement of the gradient also changes the convergence rates of the algorithms. We examine the performance of the algorithms in the following example.

We set β = 10^-2, α := 5·10^-3 and assume that

σ(x1, x2) = 6 if (x1, x2) ∈ B_0.3(0.3, 0), and σ(x1, x2) = 1 otherwise.

The exact data (j, g) := (j^N, g) and the noisy data (j^δ, g^δ) := (j^N, g^δ) are illustrated in Figure 4.8. Here, we have

‖j − j^δ‖_{L2(Γ)} + ‖g − g^δ‖_{L2(Γ)} ≤ 0.05.
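The thesis does not spell out its noise model at this point; one common construction (an assumption on our part) is to scale a random perturbation so that its discrete L² norm matches the prescribed level exactly:

```python
import numpy as np

def perturb_to_level(g, delta, h, rng):
    """Return g plus noise whose discrete L2 norm equals delta.

    h is the grid spacing used in the quadrature ||e||^2 ~ h * sum(e_i^2).
    """
    e = rng.standard_normal(len(g))
    e *= delta / np.sqrt(h * np.sum(e**2))  # rescale to the target norm
    return g + e

rng = np.random.default_rng(0)
h = 2 * np.pi / 200
g = np.sin(np.arange(200) * h)          # sample Dirichlet data on the boundary
g_delta = perturb_to_level(g, 0.05, h, rng)
```

By construction, the discrete L² distance between g and g_delta is exactly the prescribed level 0.05.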

Figure 4.8: Optimal current j^N, exact Dirichlet data g and noisy Dirichlet data g^δ with δ = 0.05.

Figure 4.9: Values of 1/s_n in the algorithms; using exact data.

We first consider the performance of the algorithms with the exact data (j^N, g). Figure 4.9 shows that the stepsizes in Alg.1 are larger than those in SSQN.I and that they are almost equal to s_1.

4.2. Electrical Impedance Tomography

Note that the intervals [s̲, s̄] in the two algorithms have been chosen differently. Since the stepsizes in Alg.1 are larger than those in SSQN.I, Alg.1 is faster than SSQN.I. Here, the convergence of the two algorithms depends on the choice of the interval [s̲, s̄]. We have observed, for example, that SSQN.I under the setting [s̲, s̄] = [2·10^-3, 2·10^-1] (as for Alg.1) does not converge. Similarly, Alg.1 under the setting s̲ = 10^-3 also diverges.
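The role of the interval can be illustrated with a Barzilai–Borwein-type rule of the kind analyzed in [63]: the curvature estimate s_n is projected onto the interval before the step 1/s_n is taken. The sketch below uses our own naming and a toy quadratic, not the thesis's exact rule:

```python
import numpy as np

def clipped_bb(s_prev, x, x_old, grad, grad_old, s_lo, s_hi):
    """Barzilai-Borwein curvature estimate, projected onto [s_lo, s_hi]."""
    dx, dg = x - x_old, grad - grad_old
    denom = np.dot(dx, dx)
    s = np.dot(dx, dg) / denom if denom > 0 else s_prev
    return min(max(s, s_lo), s_hi)

# Gradient descent with step 1/s_n on f(x) = 0.5 * x^T diag(1, 10) x.
D = np.array([1.0, 10.0])
x_old = np.array([1.1, 1.1])
x = np.array([1.0, 1.0])
grad_old, grad = D * x_old, D * x
s = 1.0
for _ in range(60):
    s = clipped_bb(s, x, x_old, grad, grad_old, s_lo=0.5, s_hi=20.0)
    x_old, grad_old = x, grad
    x = x - grad / s          # step size is 1/s_n
    grad = D * x
```

Here the interval [0.5, 20] contains the relevant curvatures (the eigenvalues 1 and 10), so the clipping is inactive and the iteration converges; an interval that excludes them forces over- or under-stepping, which mirrors the divergence observed above when the interval is chosen badly.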

In Figure 4.10, the decrease of MSE(σ_n) shows that the iterates σ_n in the algorithms tend to σ very slowly in the first iterations and after that they move away. The decreasing rates of the objective functional in the algorithms are very slow, too. Here, Alg.1 is faster than Alg.2, SSQN.I and SSQN.B. However, Alg.3 is still the fastest. These observations show that using the Sobolev gradient instead of the L² gradient not only smooths the recovered solutions but also changes the search direction. Thus, Alg.1 now turns into an accelerated version of the gradient method, while SSQN.B with the Sobolev gradient loses its advantage.

In electrical impedance tomography, it is very hard to obtain accurate approximations of σ, especially when using only one (optimal) current as in our setting. With exact data, the values of MSE(σ_n) in the algorithms are very large; they decrease very slowly in the first iterations and then increase. The values of the objective functional decrease slowly as well. From Figure 4.10, it seems that we cannot obtain an approximation σ_n of σ such that MSE(σ_n) < 4.

Figure 4.12 presents the physical conductivity σ and the recovered solutions σ_n of all algorithms. It is easy to see that the algorithms locate the inhomogeneous part of the conductivity σ very well, but it is more difficult to recover the values of σ accurately.

Figure 4.10: Values of ‖D(σ_n)‖_{L2(Ω)}, MSE(σ_n) and Θ(σ_n) in the algorithms; using exact data.

Now, we consider the case of the perturbed data (j^N, g^δ) plotted in Figure 4.8. Similar to the exact-data case, the sequences MSE(σ_n) decrease very slowly in the first iterations and then increase.

Their values are still large as well. The objective functional Θ(σ_n) decreases monotonically, but very slowly.

Figure 4.11: Values of ‖D(σ_n)‖_{L2(Ω)}, MSE(σ_n) and Θ(σ_n) in the algorithms; using data with 5% noise.

Figure 4.13 illustrates σ and the iterates σ_n of the algorithms using the noisy data (j^N, g^δ). With 5% noise, the algorithms still recover the conductivity σ very well compared with the exact-data case.


Figure 4.12: 3D plots and contour plots of σ and σ_n in the algorithms; using exact data.

Figure 4.13: 3D plots and contour plots of σ and σ_n in the algorithms; using data with 5% noise.

Conclusions

We have first investigated sparsity regularization for two parameter identification problems. For these problems, sparsity regularization incorporated with the energy functional approach was analyzed.

In the diffusion coefficient identification problem, the regularized problem was proven to be well-posed, and convergence rates of the method were obtained under a simple source condition. An advantage of the new approach is that it works with a convex and weakly lower semicontinuous functional. Therefore, the problem can be solved numerically by fast algorithms, and the proof of well-posedness is obtained without further conditions. Another advantage is that the source condition for obtaining convergence rates is very simple. We emphasize that our source condition is the simplest compared with the others in the least-squares approach: we do not need a smallness requirement (or generalizations of it) in the source condition.

Unlike the above problem, in electrical impedance tomography, although sparsity regularization incorporated with the energy functional approach is applied, we cannot be sure about the convexity of the energy functional. In order to obtain the well-posedness of the method, we have required some regularity properties of the recovered parameter. For this problem, convergence rates of the method have been obtained as well. However, the source condition is not as simple as that of the previous problem.

Secondly, we have also proposed iterative methods for the minimization problem arising from sparsity regularization of nonlinear inverse problems.

A gradient-type method has been proven to converge for the non-convex minimization problem under certain conditions. A choice of efficient heuristic step sizes has been analyzed as well. In the special case where the minimization problem is convex, two accelerated versions have been proposed and their convergence has been proved. The decreasing rates of the objective functional in the two accelerated algorithms are O(1/n²), with n being the number of iterations. This order of convergence rate is known to be the best possible for first-order schemes.
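For the convex case with a linear operator, the accelerated O(1/n²) scheme is essentially FISTA [7]. Below is a minimal sketch for min_x ½‖Ax − b‖² + α‖x‖₁; the thesis's accelerated algorithms handle nonlinear operators, which this sketch does not attempt:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, alpha, L, n_iter):
    """Accelerated proximal gradient (FISTA) for
       min_x 0.5*||A x - b||^2 + alpha*||x||_1.
    L must be an upper bound on ||A^T A||; the objective value
    then decreases at the rate O(1/n^2)."""
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, alpha / L)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Without the extrapolation step (y ≡ x_new, t ≡ 1), this reduces to plain iterative soft thresholding with the slower O(1/n) rate.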

Other iterative methods considered are the semismooth Newton and quasi-Newton methods. We have proposed conditions to obtain the convergence and the convergence rates of the two methods. Note that the semismooth Newton method is difficult to implement in practice since it involves the computation of the second derivative, which is very hard in applications. To overcome this, we replaced the second derivative by approximations, which leads to the semismooth quasi-Newton method. Some conditions for the convergence of the semismooth quasi-Newton method are given. Furthermore, two specific cases of the method have been proposed and their convergence has been proven.
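The derivative-replacement idea can be illustrated with the classical Broyden update, where a secant condition replaces the (generalized) Jacobian. The sketch below runs on a small piecewise-linear system of our own choosing, not the thesis's operator setting:

```python
import numpy as np

def broyden(F, x0, n_iter=50, tol=1e-10):
    """Broyden's method: the Jacobian of F is never formed; a secant
    approximation B is updated from observed differences of F instead."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                  # initial Jacobian guess
    for _ in range(n_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)     # quasi-Newton step: B s = -F(x)
        x = x + s
        y = F(x) - Fx
        B += np.outer(y - B @ s, s) / np.dot(s, s)  # rank-one secant update
    return x

# A nonsmooth (piecewise-linear) test system; its root is x = (2/3, -2).
F = lambda x: np.array([x[0] + 0.5 * abs(x[0]) - 1.0,
                        2.0 * x[1] + abs(x[1]) + 2.0])
x = broyden(F, [1.0, 1.0])
```

Since the update only uses evaluations of F, the same structure carries over when an exact generalized derivative is unavailable, which is the situation the semismooth quasi-Newton method addresses.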

All algorithms have been applied to the two parameter identification problems. The examples have shown that the algorithms work very well. They have also pointed out that the conditions for the convergence of the algorithms proposed in the thesis are only sufficient, not necessary: the algorithms have worked well even when some of the conditions are violated.

On the other hand, the numerical examples showed that both parameter identification problems are severely ill-posed, especially electrical impedance tomography. The algorithms did not converge when the L²(Ω) gradient is used (more exactly, the candidate for the L²(Ω) gradient). To use the algorithms, we need either prior information on the recovered parameters or the Sobolev gradient instead of the L²(Ω) gradient. Note that using the Sobolev gradient smooths the recovered parameters, but it makes the algorithms more stable and accelerates the gradient-type method as well.

There are several possible directions for further work. For example, sparsity regularization incorporated with the energy functional approach could be investigated for other problems, and all algorithms in the thesis can be generalized to some other types of minimization problems. In particular, some other specific cases of the semismooth quasi-Newton method need further consideration.

Bibliography

[1] R. Acar and C. R. Vogel. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems, 10:1217–1229, 1994.

[2] G. Alessandrini. Open issues of stability for the inverse conductivity problem. J. Inverse and Ill-Posed Probl., 15(5):451–460, 2007.

[3] A. Allers and F. Santosa. Stability and resolution analysis of a linearized problem in electrical impedance tomography. Inverse Problems, 7(4):515–533, 1991.

[4] J. Aujol. Some algorithms for total variation based image restoration. http://hal.archives-ouvertes.fr/hal-00260494/en, 2011.

[5] H. T. Banks and K. Kunisch. Estimation Techniques for Distributed Parameter Systems. Systems and Control: Foundations and Applications Series. Birkhäuser, Boston, 1989.

[6] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA J. Numer. Anal., 8:141–148, 1988.

[7] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.

[8] T. Bonesky, K. Bredies, D. A. Lorenz, and P. Maass. A generalized conditional gradient method for nonlinear operator equations with sparsity constraints. Inverse Problems, 23:2041–2058, 2007.

[9] L. Borcea. Electrical impedance tomography. Inverse Problems, 18:99–136, 2002.

[10] K. Bredies and D. A. Lorenz. Linear convergence of iterative soft thresholding. J. Fourier Anal. Appl., 14:813–837, 2008.

[11] K. Bredies, D. A. Lorenz, and P. Maass. A generalized conditional gradient method and its connection to an iterative shrinkage method. Computational Optimization and Applications, 42(2):173–193, 2009.

[12] L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys., 7:200–217, 1967.

[13] M. Burger and S. Osher. Convergence rates of convex variational regularization. Inverse Problems, 20:1411–1421, 2004.

[14] J. Cannon. The One Dimensional Heat Equation. Encyclopedia of Mathematics and its Applications, Vol. 23. Addison-Wesley, 1984.

[15] T. F. Chan and X. Tai. Identification of discontinuous coefficients in elliptic problems using total variation regularization. SIAM J. Sci. Comput., 25(3):881–904, 2003.

[16] T. F. Chan and X. Tai. Level set and total variation regularization for elliptic inverse problems with discontinuous coefficients. Journal of Computational Physics, 193:40–66, 2003.

[17] G. Chen and M. Teboulle. Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM J. Optim., 3(3):538–543, 1993.

[18] X. Chen. Superlinear convergence of smoothing quasi-Newton methods for nonsmooth equations. J. Comput. Appl. Math., 80:105–126, 1996.

[19] X. Chen. Superlinear convergence and smoothing quasi-Newton methods for nonsmooth equations. Comput. Appl. Math., 80:105–126, 1997.

[20] Z. Chen and J. Zou. An augmented Lagrangian method for identifying discontinuous parameters in elliptic systems. SIAM Journal on Control and Optimization, 37:892–910, 1999.

[21] M. Cheney and D. Isaacson. Distinguishability in impedance imaging. IEEE Transactions on Biomedical Engineering, 39(8):852–860, 1992.

[22] M. Cheney, D. Isaacson, and J. C. Newell. Exact solutions to a linearized inverse boundary value problem. Inverse Problems, 6(6):923–934, 1990.

[23] M. Cheney, D. Isaacson, and J. C. Newell. Electrical impedance tomography. SIAM Review, 41(1):85–101, 1999.

[24] M. Cheney, D. Isaacson, J. C. Newell, S. Simske, and J. Goble. NOSER: An algorithm for solving the inverse conductivity problem. Int. J. Imag. Syst. Tech., 2(2):66–75, 1990.

[25] E. T. Chung, T. F. Chan, and X. C. Tai. Electrical impedance tomography using level set representation and total variational regularization. J. Comput. Phys., 205(1):357–372, 2005.

[26] P. G. Ciarlet. The Finite Element Method for Elliptic Problems. SIAM, Philadelphia, 2002.

[27] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math., 57:1413–1457, 2004.

[28] J. E. Dennis and J. J. Moré. A characterization of superlinear convergence and its application to quasi-Newton methods. Mathematics of Computation, 28(126):549–560, 1974.

[29] D. C. Dobson and F. Santosa. Resolution and stability analysis of an inverse problem in electrical impedance tomography: dependence on the input current patterns. SIAM Journal on Applied Mathematics, 54(6):1542–1560, 1994.

[30] H. W. Engl, M. Hanke, and A. Neubauer.Regularization of Inverse Problems. Kluwer, Dordrecht, 1996.

[31] H. W. Engl, K. Kunisch, and A. Neubauer. Convergence rates for Tikhonov regularization of non-linear ill-posed problems. Inverse Problems, 5(4):523, 1989.

[32] L. C. Evans and R. F. Gariepy. Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics. CRC Press, Boca Raton, 1992.

[33] R. Falk. Error estimates for the numerical identification of a variable coefficient. Mathematics of Computation, 40(3):537–546, July 1983.


[34] L. Gan. Block compressed sensing of natural images. In 15th International Conference on Digital Signal Processing, pages 403–406, July 2007.

[35] M. Gehre, T. Kluth, A. Lipponen, B. Jin, A. Seppänen, J. Kaipio, and P. Maass. Sparsity reconstruction in electrical impedance tomography: an experimental evaluation. in press, 2011.

[36] D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second Order. A series of comprehensive studies in mathematics. Springer, Berlin, 1998.

[37] M. Grasmair, M. Haltmeier, and O. Scherzer. Sparsity regularization with l^q penalty term. Inverse Problems, 24:055020, 2008.

[38] R. Griesse and D. A. Lorenz. A semismooth Newton method for Tikhonov functionals with sparsity constraints. Inverse Problems, 24:035007, 2008.

[39] E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. Technical report, 2007.

[40] D. N. Hào and T. N. T. Quyen. Convergence rates for Tikhonov regularization of coefficient identification problems in Laplace-type equation. Inverse Problems, 26:125014, 2010.

[41] D. N. Hào and T. N. T. Quyen. Convergence rates for Tikhonov regularization of a two-coefficient identification problem in an elliptic boundary value problem. Numer. Math., pages 1–33, 2011.

[42] D. N. Hào and T. N. T. Quyen. Convergence rates for total variation regularization of coefficient identification problems in elliptic equations I. Inverse Problems, 27:075008, 2011.

[43] D. N. Hào and T. N. T. Quyen. Convergence rates for total variation regularization of coefficient identification problems in elliptic equations II. Journal of Mathematical Analysis and Applications, 2012.

[44] M. Hintermüller, K. Ito, and K. Kunisch. The primal-dual active set strategy as a semismooth Newton method. SIAM J. Optim., 13(3):865–888, 2003.

[45] B. Hofmann, B. Kaltenbacher, C. Pöschl, and O. Scherzer. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Problems, 23:987–1010, 2007.

[46] L. B. Horwitz and P. E. Sarachik. Davidon's method in Hilbert space. SIAM Journal on Applied Mathematics, 16:676–695, 1968.

[47] O. Y. Imanuvilov, G. Uhlmann, and M. Yamamoto. The Calderón problem with partial data in two dimensions. J. Amer. Math. Soc., 23(3):655–691, 2010.

[48] D. Isaacson. Distinguishability of conductivities by electric current computed tomography. IEEE Transactions on Medical Imaging, 5(2):91–95, 1986.

[49] K. Ito and K. Kunisch. Maximizing robustness in nonlinear ill-posed inverse problems. SIAM Journal on Control and Optimization, 33(2):643–666, 1995.

[50] B. Jin, T. Khan, and P. Maass. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization. International Journal for Numerical Methods in Engineering, 2011.

[51] B. Jin, T. Khan, P. Maass, and M. Pidcock. Function spaces and optimal currents in impedance tomography. Journal of Inverse and Ill-posed Problems, 19(1):25–48, 2011.

[52] B. Jin and P. Maass. An analysis of electrical impedance tomography with applications to Tikhonov regularization. http://www.dfg-spp1324.de/download/preprints/preprint070.pdf, 2011.

[53] A. Kirsch. An Introduction to the Mathematical Theory of Inverse Problems. Applied Mathematical Sciences. Springer, New York, 1996.

[54] K. C. Kiwiel. Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim., 35:1142–1168, 1997.

[55] I. Knowles. A variational algorithm for electrical impedance tomography. Inverse Problems, 14:1513–1525, 1998.

[56] I. Knowles. Parameter identification for elliptic problems. Journal of Computational and Applied Mathematics, 131:175–194, 2001.

[57] R. V. Kohn and B. Lowe. A variational method for parameter identification. Mathematical Modeling and Numerical Analysis, 22:119–158, 1988.

[58] C. Kravaris and J. H. Seinfeld. Identification of parameters in distributed parameter systems by regularization. SIAM Journal on Control and Optimization, 23:217–241, 1985.

[59] Y. W. Kwon and H. Bang. The Finite Element Method Using MATLAB. CRC Press, Boca Raton, 2000.

[60] A. Lechleiter and A. Rieder. Newton regularizations for impedance tomography: a numerical study. Inverse Problems, 22(6):1967–1987, 2006.

[61] W. R. B. Lionheart. EIT reconstruction algorithms: pitfalls, challenges and recent developments. Physiol. Meas., 25(1):125–142, 2004.

[62] D. A. Lorenz. Convergence rates and source conditions for Tikhonov regularization with sparsity constraints. J. Inv. Ill-Posed Probl., 16:463–478, 2008.

[63] D. A. Lorenz, P. Maass, and P. Q. Muoi. Gradient descent methods based on quadratic approximations of Tikhonov functionals with sparsity constraints: theory and numerical comparison of stepsize rules. Preprint, 2011.

[64] M. Lukaschewitsch, P. Maass, and M. Pidcock. Tikhonov regularization for electrical impedance tomography on unbounded domains. Inverse Problems, 19(3):585–610, 2003.

[65] P. Maass and P. Q. Muoi. Semismooth Newton and quasi-Newton methods for minimization problems in weighted l1 regularization of nonlinear inverse problems. Preprint, 2011.

[66] P. Maass and P. Q. Muoi. Sparsity regularization for electrical impedance tomography: the well-posedness and convergence rates. Preprint, 2011.

[67] P. Maass and P. Q. Muoi. Sparsity regularization for the diffusion coefficient identification problem: the well-posedness and convergence rates. Preprint, 2011.

[68] R. V. Mayorga and V. H. Quintana. A family of variable metric methods in function space, without exact line searches. Journal of Optimization Theory and Applications, 31:303–329, 1980.

[69] N. G. Meyers. An L^p estimate for the gradient of the solutions of second order elliptic divergence equations. Ann. Scuola Norm. Sup. Pisa (3), 17:189–206, 1963.


[70] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), September 2007.

[71] A. Neubauer. On enhanced convergence rates for Tikhonov regularization of nonlinear ill-posed problems in Banach spaces. Inverse Problems, 25:065009, 2009.

[72] J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.

[73] J. E. Dennis and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ, 1983.

[74] R. Ramlau and G. Teschke. A Tikhonov-based projection iteration for nonlinear ill-posed problems with sparsity constraints. Numer. Math., 104:177–203, 2006.

[75] E. Resmerita and O. Scherzer. Error estimates for non-quadratic regularization and the relation to enhancement. Inverse Problems, 22:801–814, 2006.

[76] G. R. Richter. Numerical identification of a spatially varying diffusion coefficient. Mathematics of Computation, 36:375–386, 1981.

[77] L. Rondi and F. Santosa. Enhanced electrical impedance tomography via the Mumford-Shah functional. ESAIM Control Optim. Calc. Var., 6:517–538, 2001.

[78] E. Sachs. Broyden’s method in Hilbert space. Math. Programming, 35:71–82, 1986.

[79] F. Santosa and M. Vogelius. A backprojection algorithm for electrical impedance imaging. SIAM J. Appl. Math., 50(1):216–243, 1990.

[80] D. Sun and J. Han. Newton and quasi-Newton methods for a class of nonsmooth equations and related problems. SIAM J. Optim., 7(2):463–480, 1997.

[81] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk. A new compressive imaging camera architecture using optical-domain compression. In Proc. of Computational Imaging IV at SPIE Electronic Imaging, pages 43–52, 2006.

[82] G. Teschke and R. Ramlau. An iterative algorithm for nonlinear inverse problems with joint sparsity constraints in vector-valued regimes and an application to color image inpainting. Inverse Problems, 23(5):1851–1870, 2007.

[83] G. Uhlmann. Commentary on Calderón's paper (29), On an inverse boundary value problem. Amer. Math. Soc., pages 623–636, 2008.

[84] M. Ulbrich. Semismooth Newton methods for operator equations in function spaces. SIAM J. Optim., 13(3):805–842, 2003.

[85] M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk. An architecture for compressive imaging. In IEEE International Conference on Image Processing (ICIP), pages 1273–1276, 2006.

[86] P. Weiss, L. Blanc-Féraud, and G. Aubert. Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput., 31:2047–2080, 2009.