Abstract
The goal of subspace learning is to find a k-dimensional subspace of ℝ^d such that the expected squared distance between instance vectors and the subspace is as small as possible. In this paper we study subspace learning in a partial-information setting, in which the learner may observe only r ≤ d attributes of each instance vector. We propose several efficient algorithms for this task and analyze their sample complexity.
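The abstract's setting can be illustrated with a short sketch. The following is not the authors' algorithm but one natural approach under the same partial-information constraint: from each instance, observe r of the d attributes uniformly at random, form an inverse-probability-weighted (unbiased) estimate of the second-moment matrix, and take its top-k eigenvectors as the subspace. All function and variable names here are hypothetical illustrations.

```python
import numpy as np

def partial_second_moment(X, r, rng):
    """Unbiased estimate of E[x x^T] when only r of d attributes
    are observed per instance, sampled uniformly without replacement.
    Off-diagonal entries are seen with probability r(r-1)/(d(d-1)),
    diagonal entries with probability r/d; we divide by these
    probabilities to remove the sampling bias."""
    n, d = X.shape
    C = np.zeros((d, d))
    for x in X:
        idx = rng.choice(d, size=r, replace=False)  # observed attributes
        v = np.zeros(d)
        v[idx] = x[idx]
        M = np.outer(v, v)
        # inverse-probability weighting of observed entries
        W = M * (d * (d - 1)) / (r * (r - 1))
        np.fill_diagonal(W, np.diag(M) * d / r)
        C += W
    return C / n

# Recover a k-dimensional subspace from the top-k eigenvectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10)) @ rng.normal(size=(10, 10))
C_hat = partial_second_moment(X, r=4, rng=rng)
eigvals, eigvecs = np.linalg.eigh(C_hat)
U = eigvecs[:, -3:]  # orthonormal basis of the estimated subspace (k = 3)
```

The estimator is unbiased but has higher variance than the fully observed one, which is why the sample complexity depends on r; the paper's contribution is efficient algorithms with guarantees in exactly this regime.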
| Original language | English |
|---|---|
| Journal | Journal of Machine Learning Research |
| Volume | 17 |
| State | Published - 1 Apr 2016 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2016 Alon Gonen, Dan Rosenbaum, Yonina C. Eldar and Shai Shalev-Shwartz.
Keywords
- Budgeted learning
- Learning theory
- Learning with partial information
- Principal components analysis
- Statistical learning
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence