
8.5 Sampling Inequality with Polynomials and Galerkin Data


This section is completely independent from the previous ones. In this section we shall choose $V$ to be a polynomial space, to be more precise $V = \pi_k(\Omega)$ for some $k \in \mathbb{N}$. This means that we do a kind of generalized finite element theory with $\pi_k(\Omega)$. To derive an upper bound for $\|\delta_y\|_V$ we first have to work locally on nice "small" domains. The first step in deriving upper bounds is to consider only local regions $D$ that are star-shaped with respect to a ball. A bounded domain that is star-shaped with respect to a ball satisfies a uniform interior cone condition.

In the case that $V$ consists of polynomials, a bound on $\|\delta_y\|_V$ can be found in [64]. If $V = \pi_k(\mathbb{R}^d)$ is the space of all algebraic polynomials of degree not exceeding $k$, we know [38]

$$\|D^\alpha p\|_{L_\infty(D)} \le \left(\frac{2k^2}{r\sin(\theta)}\right)^{|\alpha|} \|p\|_{L_\infty(D)}$$

for arbitrary $p \in V$. Unfortunately, this estimate does not use the $W_2^\tau$-norm. Therefore we have to modify this result. To do so, we use a result from [11, Lemma 4.5.3], i.e., for $1 \le p, q \le \infty$ and $0 \le m \le \ell$ we have

$$\|v\|_{W_p^\ell(D)} \le C \rho_D^{\,m-\ell+d/p-d/q}\, \|v\|_{W_q^m(D)} \qquad \text{for all } v \in \pi_k(\mathbb{R}^d),$$

where $\rho_D = \operatorname{diam}(D) < 1$ denotes the diameter of the domain $D$.
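
For later use we record the particular instance of this inverse estimate that enters the proof of Lemma 8.5.1 below, namely $p = q = 2$, $\ell = \beta$ and $m = \tau$; this is merely a direct specialization of the bound above:

$$\|v\|_{W_2^\beta(D)} \le C \rho_D^{\,\tau-\beta}\, \|v\|_{W_2^\tau(D)} \qquad \text{for all } v \in \pi_k(\mathbb{R}^d),\ \tau \le \beta .$$

Since $\rho_D < 1$ and $\tau \le \beta$, the factor $\rho_D^{\tau-\beta} \ge 1$ quantifies the price of passing from the weaker to the stronger Sobolev norm on a small domain.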

Lemma 8.5.1 If $D$ is star-shaped with respect to a ball with radius $r$, then for every $\alpha \in \mathbb{N}_0^d$, all real numbers $\beta > d/2$ and $\tau \le \beta$, and every $y \in D$,

$$|\delta_y D^\alpha p| \le C \left(\frac{2k^2}{r\sin(\theta)}\right)^{|\alpha|} \rho_D^{\,\tau-\beta}\, \|p\|_{W_2^\tau(D)} \qquad \text{for all } p \in V = \pi_k(D) .$$

Proof: Putting the above inequalities together yields, for $\beta > d/2$ and any $y \in D$,

$$|\delta_y D^\alpha p| \le \|D^\alpha p\|_{L_\infty(D)} \le \left(\frac{2k^2}{r\sin(\theta)}\right)^{|\alpha|} \|p\|_{L_\infty(D)} \le C \left(\frac{2k^2}{r\sin(\theta)}\right)^{|\alpha|} \|p\|_{W_2^\beta(D)} \le C \left(\frac{2k^2}{r\sin(\theta)}\right)^{|\alpha|} \rho_D^{\,\tau-\beta}\, \|p\|_{W_2^\tau(D)} ,$$

where we used the Markov-type inequality from [38], Sobolev's embedding inequality (which requires $\beta > d/2$), and the inverse estimate from [11, Lemma 4.5.3]. □
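
As a quick plausibility check of the Markov-type factor (an illustration we add here, not part of the original argument), consider the one-dimensional case $D = [-1,1]$ with $r = 1$ and $\sin(\theta) = 1$: the classical Markov inequality $\|p'\|_{L_\infty} \le k^2 \|p\|_{L_\infty}$ is attained by the Chebyshev polynomials $T_k$, so the factor $2k^2$ used above should never be exceeded. A minimal numerical sketch in Python, under these assumptions:

import numpy as np
from numpy.polynomial import chebyshev as cheb

# Sanity check of the classical 1D Markov bound against the factor 2*k^2;
# this illustrates only d = 1 on [-1, 1], not the multivariate star-shaped
# setting of Lemma 8.5.1.
x = np.linspace(-1.0, 1.0, 20001)
for k in range(1, 11):
    coef = np.zeros(k + 1)
    coef[k] = 1.0                     # coefficients of T_k in the Chebyshev basis
    p = cheb.Chebyshev(coef)
    ratio = np.max(np.abs(p.deriv()(x))) / np.max(np.abs(p(x)))
    print(k, round(float(ratio), 2), 2 * k**2, ratio <= 2 * k**2)

The printed ratios are close to $k^2$ (the maximum of $|T_k'|$ is attained at the endpoints) and stay below $2k^2$, as expected.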

Since we are going to do an $L_2$-theory, we shall assume $\alpha = 0$.

We assume that the domain $D$ is compact, i.e., bounded and closed, so that for any polynomial $p \in \pi_k(\mathbb{R}^d)$ there is a point $\tilde x \in D$ such that

$$\max_{x\in D} |p(x)| = |p(\tilde x)| = |\delta_{\tilde x} p| .$$

Proposition 8.5.2 Assume that the solution $w_g$ of the adjoint problem (8.2.4) satisfies $w_g \in W_2^\sigma(\Omega)$, where we have to impose $\sigma = \xi$. Let $\{\phi_j^D\}_{j=1,\dots,n}$ be an orthonormal basis of $\pi_\tau(D)$ with respect to the $W_2^\tau$-inner product. Then there is a constant $C > 0$ such that for all $u \in W_2^\tau(D)$ we have

$$\|u\|_{L_2(D)} \le C \left( \rho_D^{\,\sigma-\tau}\, \|u\|_{W_2^\tau(D)} + \gamma^{-1} \rho_D^{\,\tau-\beta+d/2}\, \|a(u,\phi)\|_{\ell_2(n)} \right), \qquad \text{with } \sigma > \tau .$$

Proof: We find for an arbitrary polynomial $p$ and any $u \in W_2^\tau(D)$

$$\begin{aligned}
\|u\|_{L_2(D)} &\le \|u-p\|_{L_2(D)} + \|p\|_{L_2(D)}\\
&\le \|u-p\|_{L_2(D)} + \operatorname{vol}(D)^{1/2}\, |\delta_{\tilde x} p|\\
&\le \|u-p\|_{L_2(D)} + C \rho_D^{\,d/2} \Big| \sum_{j=1}^{n} b_j(\tilde x)\, a(p, \phi_j) \Big|\\
&\le \|u-p\|_{L_2(D)} + C \rho_D^{\,d/2} \Big| \sum_{j=1}^{n} b_j(\tilde x)\, \big( a(u-p, \phi_j) + a(u, \phi_j) \big) \Big|\\
&\le \|u-p\|_{L_2(D)} + C \rho_D^{\,d/2}\, \|b(\tilde x)\|_2\, \|a(u-p, \phi)\|_{\ell_2(n)} + C \rho_D^{\,d/2}\, \|b(\tilde x)\|_2\, \|a(u, \phi)\|_{\ell_2(n)} . \qquad (8.5.1)
\end{aligned}$$

Since $a(\cdot,\cdot)$ is coercive, we can choose a polynomial $\tilde p_D \in V = \pi_\tau(D)$ such that

$$a(u, \phi_j) = a(\tilde p_D, \phi_j) \qquad \text{for all } 1 \le j \le n .$$
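
In other words (a standard reformulation which we add for clarity; it is not spelled out in the text), $\tilde p_D$ is the Galerkin projection of $u$ with respect to $a(\cdot,\cdot)$: writing $\tilde p_D = \sum_{i=1}^{n} c_i \phi_i$, the coefficient vector solves the square linear system

$$\sum_{i=1}^{n} c_i\, a(\phi_i, \phi_j) = a(u, \phi_j), \qquad j = 1,\dots,n,$$

whose matrix $\big(a(\phi_i,\phi_j)\big)_{i,j=1}^{n}$ is invertible because $a(\cdot,\cdot)$ is coercive on the finite-dimensional space $V$.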

We point out that the polynomial $\tilde p_D$ depends on the domain $D$. Furthermore, we can use Theorem 8.2.5 and Lemma 8.5.1 with $|\alpha| = 0$ to note that

$$\|b(\tilde x)\|_2 \le \gamma^{-1} \|\delta_{\tilde x}\|_V \le C \gamma^{-1} \rho_D^{\,\tau-\beta} .$$

Since the $\phi_j$ form an orthonormal basis of $\pi_\tau(D)$, choosing $p = \tilde p_D$ in (8.5.1) makes the term $a(u-\tilde p_D, \phi_j)$ vanish, and the estimate reduces to

$$\|u\|_{L_2(D)} \le \|u - \tilde p_D\|_{L_2(D)} + C \gamma^{-1} \rho_D^{\,\tau-\beta+d/2}\, \|a(u, \phi)\|_{\ell_2(n)} .$$

To bound the term $\|u - \tilde p_D\|_{L_2(D)}$ we will use the Aubin–Nitsche construction from the introduction, i.e.,

$$\|u - \tilde p_D\|_{L_2(D)} \le K_a\, \|u - \tilde p_D\|_{W_2^\tau(\Omega)}\, \sup_{g \in L_2(D)} \frac{\inf_{\phi \in \pi_\tau(D)} \|w_g - \phi\|_{W_2^\tau(\Omega)}}{\|g\|_{L_2(D)}} .$$

For the next step we impose a regularity assumption in the sense of (8.2.5). Then we can bound the second factor by the Bramble–Hilbert lemma, which gives

$$\inf_{\phi \in \pi_\tau(D)} \|w_g - \phi\|_{W_2^\tau(\Omega)} \le \rho_D^{\,\sigma-\tau}\, |w_g|_{W_2^\sigma(D)} \le C \rho_D^{\,\sigma-\tau}\, \|g\|_{L_2(D)} .$$

For the first factor we use Céa's lemma [10] to get

$$\|u - \tilde p_D\|_{W_2^\tau(D)} \le C \|u\|_{W_2^\tau(D)} .$$

Putting things together yields the claimed estimate. □
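
To see what Proposition 8.5.2 gives in a concrete situation, consider for instance $d = 2$, $\beta = 2 > d/2$, $\tau = 2$ and $\sigma = 3$ (these values are chosen only for illustration and depend on our reading of the exponents above). Then $\sigma - \tau = 1$ and $\tau - \beta + d/2 = 1$, so both terms gain one full power of the local diameter:

$$\|u\|_{L_2(D)} \le C \left( \rho_D\, \|u\|_{W_2^2(D)} + \gamma^{-1} \rho_D\, \|a(u, \phi)\|_{\ell_2(n)} \right) .$$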

This is a local sampling inequality. We shall now apply a covering argument to extend this result to the global domain. To derive a global sampling inequality we again cover the global domain $\Omega$ by nice small domains $D_t$ as described in Theorem 3.3.10. From now on we assume $h \le Q(k,\theta) R_0$, so that the construction is applicable. Then we can prove a global version of our sampling inequality.

Theorem 8.5.3 Under the assumptions of Proposition 8.5.2 and Theorem 3.3.10 there exists a constant $h_0 > 0$ with the following property. There is a constant $C > 0$ such that for all functions $u \in W_2^\tau(\Omega)$ and all $h \le h_0$ the sampling inequality

$$\|u\|_{L_2(\Omega)} \le C \left( h^{\sigma-\tau}\, \|u\|_{W_2^\tau(\Omega)} + h^{\tau-\beta+d/2}\, \gamma^{-1}\, \|a(u,\phi)\|_{\ell_2(n \#T_r)} \right)$$

holds, where $\{\phi_j\}_{j=z\ell+1,\dots,(z+1)\ell}$ is an orthonormal system in $\pi_\tau(D_{z+1})$ for all $z = 0,\dots,\#T_r - 1$.

Proof: The decomposition of $\Omega$, together with Proposition 8.5.2 and

$$\frac{\sin\theta}{2(1+\sin\theta)}\, Q(k,\theta)\, h = 2r \le \rho_{D_t} \le 2R = 2\, Q(k,\theta)\, h ,$$

shows that we have

$$\begin{aligned}
\|u\|_{L_2(\Omega)}^2 &= \int_\Omega |u(x)|^2\, dx \le \sum_{t \in T_r} \int_{D_t} |u(x)|^2\, dx = \sum_{t \in T_r} \|u\|_{L_2(D_t)}^2\\
&\le C \sum_{t \in T_r} \left( h^{\sigma-\tau}\, \|u\|_{W_2^\tau(D_t)} + h^{\tau-\beta+d/2}\, \gamma^{-1}\, \|a(u,\phi)\|_{\ell_2(n)} \right)^2 ,
\end{aligned}$$

where we have used the fact that we may choose the same constant $C > 0$ for all regions $D_t$. Further calculation yields

$$\begin{aligned}
\|u\|_{L_2(\Omega)}^2 &\le C \sum_{t \in T_r} \left( h^{\sigma-\tau}\, \|u\|_{W_2^\tau(D_t)} + h^{\tau-\beta+d/2}\, \gamma^{-1}\, \|a(u,\phi)\|_{\ell_2(n)} \right)^2\\
&\le C \left( h^{2(\sigma-\tau)} \sum_{t \in T_r} \|u\|_{W_2^\tau(D_t)}^2 + h^{2(\tau-\beta+d/2)}\, \gamma^{-2} \sum_{t \in T_r} \|a(u,\phi)\|_{\ell_2(n)}^2 \right)\\
&\le C M_1 \left( h^{2(\sigma-\tau)}\, \|u\|_{W_2^\tau(\Omega)}^2 + h^{2(\tau-\beta+d/2)}\, \gamma^{-2}\, \|a(u,\phi)\|_{\ell_2(n \#T_r)}^2 \right) ,
\end{aligned}$$

where the last estimate follows from Theorem 3.3.10, since

$$\sum_{t \in T_r} \|u\|_{W_p^{k+s}(D_t)}^p \le M_1\, \|u\|_{W_p^{k+s}(\Omega)}^p .$$

This finishes the proof. □
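
For completeness, the step that collects the local data terms in the last estimate is simply a concatenation of the local blocks of test functions: with one orthonormal system per patch, indexed consecutively over $D_1,\dots,D_{\#T_r}$ as in Theorem 8.5.3 (in our reading $\ell = n$, the local dimension), we have

$$\sum_{t \in T_r} \|a(u,\phi)\|_{\ell_2(n)}^2 = \sum_{z=0}^{\#T_r-1}\; \sum_{j=zn+1}^{(z+1)n} |a(u,\phi_j)|^2 = \|a(u,\phi)\|_{\ell_2(n \#T_r)}^2 ,$$

since no test function is shared between patches.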

Please note the special meaning of the parameter $h$. It is not the usual fill distance of a discrete set $X \subset \Omega$. It rather mimics the meaning of $h$ in the finite element literature as the size of the local patches.

The error analysis of this section relies on good polynomial projections. This indicates a possible generalization in the context of generalized finite elements.

Chapter 9

Discussion and Outlook

In this thesis we have systematically generalized the concept of sampling inequalities to various situations, and we have illustrated applications. The first main part considered strong sampling inequalities. We derived sampling inequalities for infinitely smooth functions, where the sampling order turned out to vary exponentially in the fill distance. As a special case, our technique reproduces the well known error estimates for classical interpolation in the native spaces of Gaussian and inverse multiquadric kernels. However, even in the special case of interpolation the optimal convergence rates are not known. Further research should address better sampling orders by avoiding the boundary effect. Although we did not pay much attention to this detail, a main drawback of the estimates lies in the constants involved, which depend exponentially on the space dimension. To avoid this exponential growth one should consider sparse grids.

We further presented a deterministic error analysis for support vector regression algorithms. We restricted ourselves to the $\epsilon$- and the $\nu$-SVR, but the described procedure can easily be generalized to all learning algorithms with penalty terms induced by kernels whose native spaces are Sobolev spaces. The sampling orders we found are optimal and were confirmed numerically. The error analysis does not depend on any assumptions on the inaccuracy of the given data, so one should combine it with stochastic models of the noise to improve parameter choices.

As an auxiliary result we proved a Bernstein inequality, which provides equivalence constants between the Sobolev and the $L_2$-norm on a finite dimensional space of translates of an RBF. For this we need a technical condition on the distribution of the centers, which seems to be artificial. Further research should overcome this.

The second main part addressed weak sampling inequalities. We considered stationary convolution-type data, which generalizes the usual finite volume methods. We derived a sampling inequality for this situation, which forms an important step towards an a priori error analysis of MLPG methods. Finally, we derived a deterministic error analysis for the solution of regularized variational problems, which arise naturally as Ritz–Galerkin approximations of PDEs in weak formulation. We presented sampling inequalities for both kernel-based and polynomial ansatz spaces. For the analysis of MLPG methods it would be useful to prove sampling inequalities for other kinds of weak data that, e.g., cover derivatives.


Bibliography

[1] A.N. Agadzhanov. Functional properties of Sobolev spaces of infinite order. Soviet Math. Dokl., 38, No. 1:88–92, 1989.

[2] R. Arcangéli, M.C. López de Silanes, and J.J. Torrens. An extension of a bound for functions in Sobolev spaces, with applications to (m, s)-spline interpolation and smoothing. Numer. Math., 107(2):181–211, 2007.

[3] R. Arcangéli, M.C. López de Silanes, and J.J. Torrens. Estimates for functions in Sobolev spaces defined on unbounded domains. J. Approx. Theory, 2008. doi:10.1016/j.jat.2008.09.001.

[4] S.N. Atluri. The meshless method (MLPG) for domain and BIE discretizations. Tech Science Press, Encino, CA, 2005.

[5] S.N. Atluri and S.P. Shen. The Meshless Local Petrov-Galerkin (MLPG) Method. Tech Science Press, Encino, CA, 2002.

[6] S.N. Atluri and T. Zhu. A new meshless local Petrov-Galerkin (MLPG) approach in computational mechanics. Computational Mechanics, 22:117–127, 1998.

[7] S.N. Atluri and T. Zhu. A new meshless local Petrov-Galerkin (MLPG) approach to nonlinear problems in computer modeling and simulation. Computer Modeling and Simulation in Engineering, 3:187–196, 1998.

[8] I. Babuska, U. Banerjee, and J.E. Osborn. Survey of meshless and generalized finite element methods: A unified approach. Acta Numerica, 12:1–125, 2003.

[9] P. Borwein and T. Erdelyi. Polynomials and Polynomial Inequalities. Springer, New York, 1995.

[10] D. Braess. Finite Elements. Theory, Fast Solvers and Applications in Solid Mechanics. Cambridge University Press, Cambridge, 2001.

[11] S. Brenner and L. Scott. The Mathematical Theory of Finite Element Methods. Springer, New York, 1994.

[12] C. C. Chang and C. L. Lin. Training ν-Support Vector Regression: Theory and Algorithms. Neural Computation, 14(8):1959–1977, 2002.


[13] E. W. Cheney. An introduction to approximation theory. McGraw-Hill, New York, 1966.

[14] P. G. Ciarlet. The Finite Element Method For Elliptic Problems. Studies in Mathematics and Its Applications. North-Holland Publishing Company, Amsterdam, New York, Oxford, 1978.

[15] S. de Marchi and R. Schaback. Stability of Kernel-Based Interpolation. Advances in Computational Mathematics, 2008. DOI: 10.1007/s10444-008-9093-4.

[16] M. Dobrowolski. Angewandte Funktionalanalysis. Springer, Berlin Heidelberg, 2006.

[17] T. Evgeniou, M. Pontil, and T. Poggio. Regularization Networks and Support Vector Machines. Advances in Computational Mathematics, 13:1–50, 2000.

[18] G.E. Fasshauer. Solving partial differential equations by collocation with radial basis functions. In A.L. Mehaute, C. Rabut, and L.L. Schumaker, editors, Surface Fitting and Multiresolution Methods, pages 131–138. Vanderbilt University Press, Nashville, 1997.

[19] G.E. Fasshauer. On the numerical solution of differential equations with radial basis functions. In C.S. Chen, C.A. Brebbia, and D.W. Pepper, editors, Boundary Element Technology XIII, pages 291–300. WIT Press, Southampton, 1999.

[20] C. Franke and R. Schaback. Convergence order estimates of meshless collocation methods using radial basis functions. Advances in Computational Mathematics, 8:381–399, 1998.

[21] C. Franke and R. Schaback. Solving partial differential equations by collocation using radial basis functions. Advances in Computational Mathematics, 93:73–82, 1998.

[22] E. Freitag and R. Busam. Funktionentheorie 1. Springer Verlag, Berlin, 2000.

[23] F. Girosi. An Equivalence Between Sparse Approximation and Support Vector Machines. Neural Computation, 10 (8):1455–1480, 1998.

[24] W. Hackbusch. Theorie und Numerik elliptischer Differentialgleichungen. Teubner, Stuttgart, 1986.

[25] A. Iske and T. Sonar. On the structure of function spaces in optimal recovery of point functionals for ENO-schemes by radial basis functions. Numerische Mathematik, 74:177–201, 1996.

[26] K. Jetter, J. Stöckler, and J.D. Ward. Norming sets and scattered data approximation on spheres. In Approximation Theory IX, Vol. II: Computational Aspects, pages 137–144. Vanderbilt University Press, 1998.

[27] L. Ling, R. Opfer, and R. Schaback. Results on meshless collocation techniques. Engineering Analysis with Boundary Elements, 30:247–253, 2006.

[28] L. Ling and R. Schaback. Stable and Convergent Unsymmetric Meshless Collocation Methods. To appear in SIAM J. Numer. Anal., 2007.

[29] F. Lu and H. Sun. Positive definite dot product kernels in learning theory. Advances in Computational Mathematics, 22:181–198, 2005.

[30] G. Lube. Theorie und Numerik elliptischer Randwertprobleme. Lecture note, available at: http://www.num.math.uni-goettingen.de/lube/FEM1_06_akt.pdf.

[31] W. R. Madych. An estimate for multivariate interpolation II. J. Approx. Theory, 142:116–128, 2006.

[32] W.R. Madych and S.A. Nelson. Bounds on multivariate polynomials and exponential error estimates for multiquadric interpolation. J. Approx. Theory, 70:94–114, 1992.

[33] J.M. Melenk. On Approximation in Meshless Methods, pages 65–141. Universitext. Springer Berlin / Heidelberg, 2005.

[34] C. A. Micchelli and M. Pontil. Learning the Kernel Function via Regularization. Journal of Machine Learning Research, 6:1099–1125, 2005.

[35] St. Müller and R. Schaback. A Newton Basis for Kernel Spaces. J. Approx. Theory, 2008. DOI: 10.1016/j.jat.2008.10.014.

[36] F.J. Narcowich and J.D. Ward. Generalized Hermite interpolation via matrix-valued conditionally positive definite functions. Math. Comput., 63:661–687, 1994.

[37] F.J. Narcowich and J.D. Ward. Scattered-data interpolation on $\mathbb{R}^n$: Error estimates for radial basis and band-limited functions. SIAM J. Math. Anal., 36:284–300, 2004.

[38] F.J. Narcowich, J.D. Ward, and H. Wendland. Sobolev bounds on functions with scattered zeros, with applications to radial basis function surface fitting. Mathematics of Computation, 74:743–763, 2005.

[39] F.J. Narcowich, J.D. Ward, and H. Wendland. Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions. Constructive Approximation, 24:175–186, 2006.

[40] K. Pelckmans, I. Goethals, J.D. Brabanter, J.A.K. Suykens, and B.D. Moor. Componentwise Least Squares Support Vector Machines, volume 177/2005 of Studies in Fuzziness and Soft Computing, pages 77–98. Springer Berlin / Heidelberg, 2005.

[41] C. Rieger. Approximative Interpolation. Master’s thesis, NAM, Göttingen, 2005.

[42] C. Rieger and B. Zwicknagl. Deterministic Error Analysis of Support Vector Machines and Related Regularized Kernel Methods. To appear in: J. Machine Learning Research.

[43] C. Rieger and B. Zwicknagl. Sampling Inequalities for Infinitely Smooth Functions, with Applications to Interpolation and Machine Learning. Adv. Comp. Math., 2008. DOI: 10.1007/s10444-008-9089-0.

[44] R. Schaback. Kernel-Based Meshless Methods. Lecture note, available at: http://www.num.math.uni-goettingen.de/schaback/teaching/07SS/vorl/kernel.pdf.

[45] R. Schaback. Unsymmetric Meshless Methods for Operator Equations. preprint, 2006.

[46] R. Schaback. Convergence of Unsymmetric Kernel-Based Meshless Collocation Methods. SIAM Journal of Numerical Analysis, 45/1:333–351, 2007.

[47] R. Schaback. Recovery of Functions from Weak Data Using Unsymmetric Meshless Kernel-Based Methods. Applied Numerical Mathematics, 58 (5):726–741, 2008.

[48] R. Schaback and H. Wendland. Inverse and saturation theorems for radial basis function interpolation. Mathematics of Computation, 71:669–681, 2002.

[49] R. Schaback and H. Wendland. Kernel Techniques: From Machine Learning to Meshless Methods. Acta Numerica, 15:543–639, 2006.

[50] B. Schölkopf, C. Burges, and V. Vapnik. Extracting support data for a given task. In Proceedings, First International Conference on Knowledge Discovery and Data Mining. AAAI Press, Menlo Park, CA, 1995.

[51] B. Schölkopf and A. J. Smola. Learning with kernels - Support Vector Machines, Regularisation, and Beyond. MIT Press, Cambridge, Massachusetts, 2002.

[52] B. Schölkopf, R. C. Williamson, and P. L. Bartlett. New Support Vector Algorithms. Neural Computation, 12:1207–1245, 2000.

[53] T. Sonar. Optimal recovery using thin plate splines in finite volume methods for the numerical solution of hyperbolic conservation laws. IMA J. Numer. Anal., 16:549–581, 1996.

[54] T. Sonar. On the construction of essentially non-oscillatory finite volume approximations to hyperbolic conservation laws on general triangulations: Polynomial recovery, accuracy, and stencil selection. Comput. Methods in Appl. Mechanics and Engineering, 140:157–181, 1997.

[55] T. Sonar. On families of pointwise optimal finite volume ENO approximations. SIAM J. Numer. Anal., 35:2350–2369, 1998.

[56] Elias M. Stein. Singular integrals and differentiability properties of functions. Princeton University Press, Princeton, N.J., 1970.

[57] V. Vapnik. The nature of statistical learning theory. Springer-Verlag, New York, 1995.

[58] G. Wahba. Spline Models for Observational Data, CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.

[59] H. Wendland. Ein Beitrag zur Interpolation mit radialen Basisfunktionen. Master's thesis, NAM, Göttingen, 1994.

[60] H. Wendland. Piecewise polynomial, positive definite and compactly supported radial basis functions of minimal degree. Adv. Comput. Math., 4:389–396, 1995.

[61] H. Wendland. Error estimates for interpolation by compactly supported radial basis functions of minimal degree. J. Approx. Th., 93:258–272, 1998.

[62] H. Wendland. Meshless Galerkin methods using radial basis functions. Math. Comput., 68:1521–1531, 1999.

[63] H. Wendland. Local polynomial reproduction and moving least squares approximation. IMA J. Numer. Anal., 21:285–300, 2001.

[64] H. Wendland. On the convergence of a general class of finite volume methods. SIAM Journal of Numerical Analysis, 43:987–1002, 2005.

[65] H. Wendland. Scattered Data Approximation. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2005.

[66] H. Wendland. On the stability of meshless symmetric collocation for boundary value problems. BIT Numerical Mathematics, 47:455–468, 2007.

[67] H. Wendland and C. Rieger. Approximate Interpolation. Numerische Mathematik, 101:643–662, 2005.

[68] J. Wloka. Partielle Differentialgleichungen: Sobolevräume und Randwertaufgaben. Mathematische Leitfäden. Teubner, Stuttgart, 1982.

[69] Z. Wu. Hermite-Birkhoff interpolation of scattered data by radial basis functions. Approximation Theory Appl., 8:1–10, 1992.

[70] B. Zwicknagl. Power Series Kernels. Constructive Approximation, 29 (2):61–84, 2009.

Acknowledgement

It is a great pleasure to thank my supervisor Professor Robert Schaback for giving me the opportunity to work on this topic and for many stimulating discussions. I am grateful for all his valuable advice and support during the time I spent on this thesis.

I would like to thank Professor Gerd Lube for his willingness to act as referee and his inspiring suggestions.

I am grateful to Professor Holger Wendland for introducing me to the field of sampling inequalities and his support.

I would like to thank Barbara Zwicknagl for the inspiring collaboration on which some chapters of this thesis are based, for numerous fruitful and clarifying discussions, and for thoroughly proof-reading this thesis uncountably many times. Besides this, I would like to thank her for the great time with her, for providing inestimable support during the last years and for always being there for me.

The financial support by the Deutsche Forschungsgemeinschaft through the Graduiertenkolleg 1023 "Identifikation in mathematischen Modellen: Synergie stochastischer und numerischer Methoden" is gratefully acknowledged. Furthermore, I would like to thank the Institut für Numerische und Angewandte Mathematik der Universität Göttingen for the financial support and the chance to work in an inspiring environment.

Finally, I would like to thank my parents for their permanent support and their understanding.

Curriculum Vitae

Christian Rieger

Schlözerweg 18, 37085 Göttingen
Email: crieger@math.uni-goettingen.de
Date of birth: August 20, 1981
Place of birth: Göttingen
Citizenship: German

EDUCATION

1987 to 1991   Höltyschule Göttingen
1991 to 1993   Lutherschule Göttingen
1993 to 2000   Theodor-Heuss-Gymnasium Göttingen
1997           Bradfield College, Great Britain
2000 to 2008   Georg-August University, Göttingen

Vordiplom      University of Göttingen, April 2002
               Major: Mathematics, Minor: Computer Science

Diplom         University of Göttingen, January 2005
               Major: Mathematics, Minor: Computer Science
               Thesis: "Approximative Interpolation mit radialen Basisfunktionen".
               Supervisor: Professor Holger Wendland
