
6.6.1 Proof of expectation of monomials over $B_{\mathbf{p}}$

Given the symmetry of the integration domain w.r.t. each $x_n$, it follows that the integral vanishes if at least one exponent $\alpha_n$ is odd. For the remaining case $\boldsymbol{\alpha} \in 2\mathbb{N}_+^N$ we use the injective substitution $\varphi: \mathbb{R}_+^N \to \mathbb{R}_+^N$ given by $x_n = y_n^{1/(\alpha_n+1)}$, $n = 1, \ldots, N$, with Jacobian determinant

\[
|\det(J_\varphi)| = \prod_{n=1}^{N} \frac{1}{\alpha_n + 1}\, |y_n|^{-\frac{\alpha_n}{\alpha_n + 1}} \tag{6.40}
\]

to obtain the transformed integral of (6.5), i.e.,

\[
\prod_{n=1}^{N} \frac{1}{\alpha_n + 1} \int_{\Omega} \prod_{n=1}^{N} |y_n|^{\frac{\alpha_n}{\alpha_n + 1}}\, |y_n|^{-\frac{\alpha_n}{\alpha_n + 1}} \,\mathrm{d}\mathbf{y} \tag{6.41}
\]
\[
= \prod_{n=1}^{N} \frac{1}{\alpha_n + 1} \int_{\Omega} 1 \,\mathrm{d}\mathbf{y}, \tag{6.42}
\]

with transformed integration domain

\[
\Omega = \Big\{ \mathbf{y} : \sum_{n=1}^{N} |y_n|^{\frac{p_n}{\alpha_n + 1}} \le 1 \Big\} =: B_{\tilde{\mathbf{p}}} \quad \text{with} \quad \tilde{p}_n = \frac{p_n}{\alpha_n + 1} \;\; \forall n. \tag{6.43}
\]

Using the volume of generalized balls from (6.2) with the characteristic vector $\tilde{\mathbf{p}}$ from (6.43) in (6.42) establishes the desired result.
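The following Python snippet is a minimal Monte Carlo sanity check of the resulting moment formula (it is not part of the thesis). It assumes that the expectation is taken w.r.t. the uniform distribution on $B_{\mathbf{p}}$ and that (6.2) is Dirichlet's volume formula $\mathrm{vol}(B_{\mathbf{p}}) = 2^N \prod_n \Gamma(1+1/p_n) / \Gamma(1 + \sum_n 1/p_n)$; the names `vol_bp`, `p`, and `alpha` are purely illustrative.

```python
# Monte Carlo check of E[prod_n x_n^{alpha_n}] for x uniform on
# B_p = {x : sum_n |x_n|^{p_n} <= 1}, compared against
# (prod_n 1/(alpha_n+1)) * vol(B_ptilde) / vol(B_p) with ptilde_n = p_n/(alpha_n+1).
import math
import numpy as np

def vol_bp(p):
    """Volume of {x in R^N : sum_n |x_n|^{p_n} <= 1} (Dirichlet's formula)."""
    return (2.0 ** len(p) * math.prod(math.gamma(1.0 + 1.0 / pn) for pn in p)
            / math.gamma(1.0 + sum(1.0 / pn for pn in p)))

rng = np.random.default_rng(0)
p = np.array([2.0, 3.0])          # characteristic vector of B_p (example values)
alpha = np.array([2, 4])          # even exponents (odd exponents give 0 by symmetry)

# closed form established in the proof
p_tilde = p / (alpha + 1)
closed_form = np.prod(1.0 / (alpha + 1)) * vol_bp(p_tilde) / vol_bp(p)

# uniform samples on B_p by rejection sampling from the bounding box [-1, 1]^N
x = rng.uniform(-1.0, 1.0, size=(2_000_000, len(p)))
inside = (np.abs(x) ** p).sum(axis=1) <= 1.0
mc_estimate = np.prod(x[inside] ** alpha, axis=1).mean()

print(f"Monte Carlo: {mc_estimate:.5f}   closed form: {closed_form:.5f}")
```

With the settings above the two printed values should agree up to Monte Carlo error.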

6.6.2 Derivation of partial derivatives

Due to linearity we may interchange the order of trace and expectation and employ the following results on derivatives of traces [PP08] (with $g'$ denoting the scalar derivative of $g$):

\[
\nabla_{\mathbf{W}} \operatorname{tr}\{g(\mathbf{W})\} = g'(\mathbf{W})^T \tag{6.44}
\]
\[
\nabla_{\mathbf{W}} \operatorname{tr}\{\mathbf{W}\mathbf{A}\} = \mathbf{A}^T. \tag{6.45}
\]

Thus, writing $\operatorname{diag}(\mathbf{W}\mathbf{A}\mathbf{x}) = \mathbf{I} \odot (\mathbf{W}\mathbf{A}\mathbf{x}\mathbf{1}^T)$ with the Hadamard product $\odot$, we have for $d \in \mathbb{N}_{++}$ that

\[
\nabla_{\mathbf{W}} \operatorname{tr}\big\{\mathbb{E}_{\mathbf{x}}\big[\operatorname{diag}^d(\mathbf{W}\mathbf{A}\mathbf{x}) \operatorname{diag}(\mathbf{x})\big]\big\} = \tag{6.46}
\]
\[
= \nabla_{\mathbf{W}} \mathbb{E}_{\mathbf{x}}\Big[\operatorname{tr}\big\{\big(\mathbf{I} \odot (\mathbf{W}\mathbf{A}\mathbf{x}\mathbf{1}^T)\big)^d \operatorname{diag}(\mathbf{x})\big\}\Big]
\]
\[
= \mathbb{E}_{\mathbf{x}}\Big[d\, \big(\mathbf{I} \odot (\mathbf{W}\mathbf{A}\mathbf{x}\mathbf{1}^T)\big)^{d-1} \operatorname{diag}(\mathbf{x})\, \nabla_{\mathbf{W}} \operatorname{tr}\big\{\mathbf{W}\mathbf{A}\mathbf{x}\mathbf{1}^T\big\}\Big]
\]
\[
= d \cdot \mathbb{E}_{\mathbf{x}}\big[\operatorname{diag}^{d-1}(\mathbf{W}\mathbf{A}\mathbf{x})\, \mathbf{x}\mathbf{x}^T\mathbf{A}^T\big],
\]

which proves the first part, while the second part follows along similar lines by replacing $\operatorname{diag}(\mathbf{x})$ with $\mathbf{I}$ in (6.46).
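As a quick plausibility check of the gradient identity (again not part of the thesis), one can compare the closed-form expression against central finite differences on an empirical version of the objective, with the expectation replaced by a mean over a fixed batch; the dimensions, batch size, and variable names below are illustrative assumptions.

```python
# Finite-difference check of
#   grad_W E_x[ tr{ diag^d(W A x) diag(x) } ] = d * E_x[ diag^{d-1}(W A x) x x^T A^T ],
# with E_x replaced by an empirical mean over the columns of X.
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 3
W, A = rng.standard_normal((N, N)), rng.standard_normal((N, N))
X = rng.standard_normal((N, 200))                 # columns are samples of x

def objective(W):
    Z = W @ A @ X                                 # (W A x) for every sample
    return np.mean(np.sum(Z ** d * X, axis=0))    # tr{diag^d(WAx) diag(x)} = sum_n (WAx)_n^d x_n

def analytic_grad(W):
    Z = W @ A @ X
    # d * E[ diag^{d-1}(WAx) x x^T A^T ] as a sum of rank-one terms over the batch
    return d * (Z ** (d - 1) * X) @ (A @ X).T / X.shape[1]

# numerical gradient by central differences
eps, num_grad = 1e-6, np.zeros_like(W)
for i in range(N):
    for j in range(N):
        E_ij = np.zeros_like(W); E_ij[i, j] = eps
        num_grad[i, j] = (objective(W + E_ij) - objective(W - E_ij)) / (2 * eps)

print(np.max(np.abs(num_grad - analytic_grad(W))))   # should be ~1e-6 or smaller
```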



[Arn57] V. I. Arnol'd. On the representability of a function of two variables in the form χ(φ(x) + ψ(y)). Uspekhi Matematicheskikh Nauk, 12(2):119–121, 1957.

[Bac17] F. Bach. Breaking the curse of dimensionality with convex neural networks. Journal of Machine Learning Research, 18(19):1–53, 2017.

[BBDL+11] V. Baldoni, N. Berline, J. De Loera, M. Köppe, and M. Vergne. How to integrate a polynomial over a simplex. Mathematics of Computation, 80(273):297–325, 2011.

[BCKV14] H. Boche, R. Calderbank, G. Kutyniok, and J. Vybiral. A survey of compressed sensing. 2014.

[BEF00] B. Büeler, A. Enge, and K. Fukuda. Exact volume computation for polytopes: a practical study. In Polytopes, combinatorics and computation, pages 131–154. Springer, 2000.

[Bel63] P. Bello. Characterization of randomly time-variant linear channels. IEEE Transactions on Communications Systems, 11(4):360–393, 1963.

[BGMN05] F. Barthe, O. Guédon, S. Mendelson, and A. Naor. A probabilistic approach to the geometry of the $\ell_p^n$-ball. The Annals of Probability, 33(2):480–513, 2005.

[BM13] N. Boumal and B. Mishra. The Manopt toolbox, 2013.


[BSR17] M. Borgerding, P. Schniter, and S. Rangan. AMP-inspired deep networks for sparse linear inverse problems. IEEE Transactions on Signal Processing, 65(16):4293–4308, 2017.

[BTGP92] Y. Brychkov, V. K. Tuan, H. J. Glaeske, and A. Prudnikov. Multidimensional integral transformations. 1992.

[Buc82] R. C. Buck. Nomographic functions are nowhere dense. Proceedings of the American Mathematical Society, pages 195–199, 1982.

[BV04] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.

[Can06] E. Candes. Compressive sampling. In Proceedings of the International Congress of Mathematicians, volume 3, pages 1433–1452. Madrid, Spain, 2006.

[CDD09] A. Cohen, W. Dahmen, and R. DeVore. Compressed sensing and best k-term approximation. Journal of the American Mathematical Society, 22(1):211–231, 2009.

[CDD+12] A. Cohen, I. Daubechies, R. DeVore, G. Kerkyacharian, and D. Picard. Capturing ridge functions in high dimensions from point queries. Constructive Approximation, 35(2):225–243, 2012.

[CDS01] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.

[CDT98] G. Calafiore, F. Dabbene, and R. Tempo. Uniform sample generation in $\ell_p$ balls for probabilistic robustness analysis. In Decision and Control, 1998. Proceedings of the 37th IEEE Conference on, volume 3, pages 3335–3340. IEEE, 1998.

[Cev09] V. Cevher. Learning with compressible priors. In Advances in Neural Information Processing Systems 22, pages 261–269, 2009.

[CGLM08] P. Comon, G. Golub, L.-H. Lim, and B. Mourrain. Symmetric tensors and symmetric tensor rank. SIAM Journal on Matrix Analysis and Applications, 30(3):1254–1279, 2008.

[CK13] P. G. Casazza and G. Kutyniok. Finite Frames: Theory and Applications. Springer, 2013.

[CR06] E. Candes and J. Romberg. Encoding the $\ell_p$ ball from limited measurements. In Data Compression Conference, 2006. DCC 2006. Proceedings, pages 33–42. IEEE, 2006.

[CS66] G. T. Cargo and O. Shisha. The Bernstein form of a polynomial. Journal of Research of the National Bureau of Standards B: Mathematics and Mathematical Physics, 70:79, 1966.

[CT05] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.

[Dav65] P. J. Davis. Gamma function and related functions. Handbook of Mathematical Functions, 1965.

[DFAG17] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier. Language modeling with gated convolutional networks. Proceedings of Machine Learning Research, 2017.

[DHS11] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2011.

[Die69] J. Dieudonne. Foundations of Modern Analysis, volume 1. Academic Press, 1969.

[DMM10] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing: I. Motivation and construction. In Information Theory Workshop (ITW), 2010 IEEE, pages 1–5. IEEE, 2010.

[DMM11] D. L. Donoho, A. Maleki, and A. Montanari. How to design message passing algorithms for compressed sensing. preprint, 2011.

[DN04] H. A. David and H. N. Nagaraja. Order statistics. Encyclopedia of Statistical Sciences, 9, 2004.

[Doe12] G. Doetsch. Introduction to the Theory and Application of the Laplace Transformation. Springer Science & Business Media, 2012.

[Don06] D. L. Donoho. High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete & Computational Geometry, 35(4):617–652, 2006.

[Dot12] V. Dotsenko. One more discussion of the replica trick: the example of the exact solution. Philosophical Magazine, 92(1-3):16–33, 2012.

[DS51] S. P. Diliberto and E. G. Straus. On the approximation of a function of several variables by the sum of functions of fewer variables. Pacific J. Math, 1(2):195–210, 1951.

[Edw22] J. Edwards. A treatise on the integral calculus: with applications, examples and problems, volume 2. Macmillan and Company, Limited, 1922.

[ES81] B. Efron and C. Stein. The jackknife estimate of variance. The Annals of Statistics, pages 586–596, 1981.

[FL09] S. Foucart and M.-J. Lai. Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0 < q \le 1$. Applied and Computational Harmonic Analysis, 26(3):395–407, 2009.

[FR13] S. Foucart and H. Rauhut. A mathematical introduction to compressive sensing. Springer, 2013.

[FSV12] M. Fornasier, K. Schnass, and J. Vybiral. Learning functions of few arbitrary linear parameters in high dimensions. Foundations of Computational Mathematics, 12(2):229–262, 2012.

[GB12] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.0 beta, September 2012.

[GBS13] M. Goldenbaum, H. Boche, and S. Stanczak. Harnessing interference for analog function computation in wireless sensor networks. IEEE Transactions on Signal Processing, 61(20):4893–4906, 2013.

[GCD12] R. Gribonval, V. Cevher, and M. E. Davies. Compressible distributions for high-dimensional statistics. IEEE Transactions on Information Theory, 58(8):5016–5034, 2012.

[GL10] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, 2010.

[GL12] G. H. Golub and C. F. Van Loan. Matrix computations. Johns Hopkins University Press, 2012.

[Gol80] M. v. Golitschek. Approximating bivariate functions and matrices by nomographic functions. Quantitative approximation (R. DeVore, K. Scherer, Eds.), pages 143–151, 1980.

[Gol84] M. v. Golitschek. Shortest path algorithms for the approximation by nomographic functions. In Anniversary Volume on Approximation Theory and Functional Analysis, pages 281–301. Springer, 1984.

[GPS06] H. J. Glaeske, A. Prudnikov, and K. Skórnik. Operational calculus and related topics. 2006.

[Gri05] M. Griebel. Sparse grids and related approximation schemes for higher dimensional problems. 2005.

[Gri11] R. Gribonval. Should penalized least squares regression be interpreted as maximum a posteriori estimation? IEEE Transactions on Signal Processing, 59(5):2405–2410, 2011.

[GS00] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Operations Research Letters, 26(3):127–136, 2000.

[GS13] M. Goldenbaum and S. Stanczak. Robust analog function computation via wireless multiple-access channels. IEEE Transactions on Communications, 2013.

[HG07] A. Hjorungnes and D. Gesbert. Complex-valued matrix differentiation: Techniques and key results. IEEE Transactions on Signal Processing, 55(6):2740–2746, 2007.

[HHI93] W. Härdle, P. Hall, and H. Ichimura. Optimal smoothing in single-index models. The Annals of Statistics, pages 157–178, 1993.

[HM11] F. Hlawatsch and G. Matz. Wireless communications over rapidly time-varying channels. Academic Press, 2011.

[Hoe92] W. Hoeffding. A class of statistics with asymptotically normal distribution. In Breakthroughs in Statistics, pages 308–334. Springer, 1992.

[HRW14] J. R. Hershey, J. L. Roux, and F. Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574, 2014.

[HT90] T. J. Hastie and R. J. Tibshirani. Generalized additive models, volume 43. CRC Press, 1990.

[Hub85] P. J. Huber. Projection pursuit. The Annals of Statistics, pages 435–475, 1985.

[Joh81] F. John. Plane waves and spherical means applied to partial differential equations. Springer-Verlag New York, 1981.

[Kas18] H. Kasai. SGDlibrary: A MATLAB library for stochastic gradient descent algorithms. Journal of Machine Learning Research (JMLR), 2018.

[Kay93] S. Kay. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory. 1993.

[KBSZ11] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. $\ell_p$-norm multiple kernel learning. Journal of Machine Learning Research, 12(Mar):953–997, 2011.

[KC08] J. Kovačević and A. Chebira. An introduction to frames. Foundations and Trends in Signal Processing, 2(1):1–94, 2008.

[KGS14] A. Kortke, M. Goldenbaum, and S. Stanczak. Analog Computation Over the Wireless Channel: A Proof of Concept. In Proc. IEEE Sensors 2014, Valencia, Spain, November 2014.

[KM16] U. Kamilov and H. Mansour. Learning optimal nonlinearities for iterative thresholding algorithms. IEEE Signal Processing Letters, 23(5):747–751, 2016.

[Kol57] A. N. Kolmogorov. The representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. 1957.

[Kol05] A. Koldobsky. Fourier analysis in convex geometry. Number 116. American Mathematical Soc., 2005.

[KS06] J. Kaipio and E. Somersalo. Statistical and computational inverse problems, volume 160. Springer Science & Business Media, 2006.

[KSWW10] F. Kuo, I. Sloan, G. Wasilkowski, and H. Woźniakowski. On decompositions of multivariate functions. Mathematics of Computation, 79(270):953–966, 2010.

[KV15] A. Kolleck and J. Vybiral. On some aspects of approximation of ridge functions. Journal of Approximation Theory, 2015.

[KWT10] Y. Kabashima, T. Wadayama, and T. Tanaka. Statistical mechanical analysis of a typical reconstruction limit of compressed sensing. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on, pages 1533–1537. IEEE, 2010.

[Lap12] P. S. Laplace. Théorie analytique des probabilités. Courcier, 1812.

[Las83] J. B. Lasserre. An analytical expression and an algorithm for the volume of a convex polyhedron in $\mathbb{R}^n$. Journal of Optimization Theory and Applications, 39(3):363–377, 1983.

[Las09] J. B. Lasserre. Linear and integer programming vs linear integration and counting: a duality viewpoint. Springer Science & Business Media, 2009.

[Las15] J. B. Lasserre. Volume of slices and sections of the simplex in closed form. Optimization Letters, 9(7):1263–1269, 2015.

[LDS90] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598–605, 1990.

[LGM96] G. G. Lorentz, M. v. Golitschek, and Y. Makovoz. Constructive approximation: advanced problems, volume 304. Springer Berlin, 1996.

[LMS+10] Z. Luo, W. Ma, A. M.-C. So, Y. Ye, and S. Zhang. Semidefinite relaxation of quadratic optimization problems. IEEE Signal Processing Magazine, 27(3):20–34, 2010.

[LZ01a] J. B. Lasserre and E. S. Zeron. A Laplace transform algorithm for the volume of a convex polytope. Journal of the ACM, 48(6):1126–1140, 2001.

[LZ01b] J. B. Lasserre and E. S. Zeron. Solving a class of multivariate integration problems via Laplace techniques. Applicationes Mathematicae, 28(4):391–405, 2001.

[LZZ07] S. Liu, J. Zhang, and B. Zhu. Volume computation using a direct Monte Carlo method. In International Computing and Combinatorics Conference, pages 198–209. Springer, 2007.

[MMB17] C. Metzler, A. Mousavi, and R. Baraniuk. Learned D-AMP: Principled neural network based compressive image recovery. In Advances in Neural Information Processing Systems, pages 1770–1781, 2017.

[MMS11] B. Mishra, G. Meyer, and R. Sepulchre. Low-rank optimization for distance matrix completion. In 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), Orlando, FL, USA, December 12-15, 2011. IEEE, 2011.

[MP13] B. Makarov and A. Podkorytov. Real analysis: Measures, integrals and applications. Springer Science & Business Media, 2013.

[MZ93] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.

[NW72] J. A. Nelder and R. W. M. Wedderburn. Generalized linear models. Journal of the Royal Statistical Society. Series A (General), pages 370–384, 1972.

[NW08] E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, vol. I: Linear Information. 2008.

[OSFM07] R. Olfati-Saber, J. A. Fax, and R. M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, 2007.

[PB13] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013.

[Pin99] A. Pinkus. Approximation theory of the MLP model in neural networks. 1999.

[PP08] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook. 2008.

[Rad07] L. A. Rademacher. Approximating the centroid is hard. In Proceedings of the Twenty-Third Annual Symposium on Computational Geometry, 2007.

[Rah14] S. Rahman. Approximation errors in truncated dimensional decompositions. Mathematics of Computation, 83(290):2799–2819, 2014.

[Red09] D. Redmond. Existence and Construction of Real-Valued Equiangular Tight Frames. PhD thesis, University of Missouri-Columbia, 2009.

[RGF09] S. Rangan, V. Goyal, and A. K. Fletcher. Asymptotic analysis of MAP estimation via the replica method and compressed sensing. In Advances in Neural Information Processing Systems, pages 1545–1553, 2009.

[SHJ03] T. Strohmer and R. W. Heath Jr. Grassmannian frames with applications to coding and communication. Applied and Computational Harmonic Analysis, 14(3):257–275, 2003.

[Sob69] I. M. Sobol. Multidimensional quadrature formulas and Haar functions [in Russian], 1969.

[Spr65] D. A. Sprecher. A representation theorem for continuous functions of several variables. Proceedings of the American Mathematical Society, 16(2):200–203, 1965.

[Spr14] D. A. Sprecher. On computational algorithms for real-valued continuous functions of several variables. Neural Networks, 59:16–22, 2014.

[Stu99] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(1-4):625–653, 1999.

[Stu10] A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica, 19:451–559, 2010.

[TB97] L. N. Trefethen and D. Bau. Numerical Linear Algebra. SIAM, 1997.

[Tes18] G. Teschl. Topics in Real and Functional Analysis. Amer. Math. Soc., 2018. To appear.
