Abstract
Functional principal component analysis for sparse longitudinal data usually proceeds by first smoothing the covariance surface, and then obtaining an eigendecomposition of the associated covariance operator. Here we consider the use of penalized tensor product splines for the initial smoothing step. Drawing on a result regarding finite-rank symmetric integral operators, we derive an explicit spline representation of the estimated eigenfunctions, and show that the effect of penalization can be notably disparate for alternative approaches to tensor product smoothing. The latter phenomenon is illustrated with two data sets derived from magnetic resonance imaging of the human brain.
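The pipeline the abstract describes (smooth the raw covariance surface with penalized tensor product splines, then eigendecompose the resulting covariance operator) can be sketched numerically. The sketch below is illustrative only and is not the authors' implementation: the synthetic covariance, the second-difference (P-spline-style) penalty, the smoothing parameter `lam`, and the simple Riemann quadrature used to discretize the operator are all assumptions. The paper derives an exact spline representation of the eigenfunctions; here they are approximated on a grid instead.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_interior, degree=3):
    """Cubic B-spline design matrix on [0, 1] with clamped uniform knots."""
    knots = np.r_[np.zeros(degree), np.linspace(0, 1, n_interior + 2), np.ones(degree)]
    n_basis = len(knots) - degree - 1
    return np.column_stack(
        [BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)]
    )

# Synthetic "raw" covariance surface: a rank-2 covariance plus symmetric noise,
# standing in for an empirical covariance from sparse longitudinal data.
G = 41
grid = np.linspace(0, 1, G)
phi1 = np.sqrt(2) * np.sin(np.pi * grid)
phi2 = np.sqrt(2) * np.sin(2 * np.pi * grid)
C_true = 2.0 * np.outer(phi1, phi1) + 0.5 * np.outer(phi2, phi2)
rng = np.random.default_rng(0)
E = rng.normal(0.0, 0.05, (G, G))
Y = C_true + (E + E.T) / 2.0

# Step 1: penalized tensor product spline smoothing of the surface.
# Closed form: C_hat = B A B', A = S Y S', S = (B'B + lam * D'D)^{-1} B',
# where D is a second-difference penalty matrix (an assumed penalty choice).
B = bspline_design(grid, n_interior=6)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1e-4
S = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T)
C_hat = B @ (S @ Y @ S.T) @ B.T
C_hat = (C_hat + C_hat.T) / 2.0  # enforce exact symmetry

# Step 2: eigendecomposition of the discretized covariance operator,
# using crude Riemann weights w so that eigenvalues approximate those
# of the integral operator and eigenfunctions are L2-normalized.
w = 1.0 / G
vals, vecs = np.linalg.eigh(C_hat)
evals = vals[::-1] * w            # descending operator eigenvalues
efuncs = vecs[:, ::-1] / np.sqrt(w)  # grid values of eigenfunctions
```

With the settings above, the two leading estimated eigenvalues should land near the true values 2.0 and 0.5, up to smoothing bias and quadrature error.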
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | Journal of Statistical Planning and Inference |
| Volume | 208 |
| DOIs | |
| State | Published - Sep 2020 |
Bibliographical note
Funding Information: This work was supported in part by grant 1777/16 from the Israel Science Foundation. Many thanks to Uriah Finkel, Fabian Scheipl, Luo Xiao and David Zucker for helpful discussions, and to Aaron Alexander-Bloch, Jay Giedd and Armin Raznahan for providing the cortical thickness data. The DTI data were collected at Johns Hopkins University and the Kennedy-Krieger Institute. Theorem 1 was inspired by a similar result derived in unpublished lecture notes of Ilya Goldsheid.
Publisher Copyright:
© 2019 The Authors
Keywords
- Bivariate smoothing
- Covariance function
- Covariance operator
- Eigenfunction
- Roughness penalty
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty
- Applied Mathematics