Abstract
How many bits of information are revealed by a learning algorithm for a concept class of VC-dimension d? Previous works have shown that even for d = 1 the amount of information may be unbounded (it may tend to infinity with the universe size). Can it be that all concepts in the class require leaking a large amount of information? We show that typically concepts do not require such leakage: there exists a proper learning algorithm that reveals O(d) bits of information for most concepts in the class. This result is a special case of a more general phenomenon we explore. If there is a low-information learner when the algorithm knows the underlying distribution on inputs, then there is a learner that reveals little information on an average concept without knowing the distribution on inputs.
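As a sketch, the main claim can be written formally using the standard notion of a learner's information complexity, the mutual information between the input sample and the output hypothesis. The notation below (the sample S, the algorithm A, and the averaging over concepts c) is our assumption for illustration, not taken verbatim from the paper:

```latex
% Sketch only; symbols are assumed, not quoted from the paper.
% C: concept class of VC-dimension d; S: labeled input sample for concept c;
% A: proper learning algorithm; I(.;.): mutual information in bits.
\[
  \exists\, A \text{ proper such that} \quad
  \mathbb{E}_{c \in C}\bigl[\, I\bigl(S ; A(S)\bigr) \bigr] \;=\; O(d),
\]
% whereas the worst case over concepts can be unbounded even for d = 1:
\[
  \sup_{c \in C}\; I\bigl(S ; A(S)\bigr) \;\longrightarrow\; \infty
  \quad \text{as the universe size grows.}
\]
```

The contrast between the two displays is the point of the abstract: the worst-case information cost over concepts can diverge, while the average-case cost stays bounded by the VC-dimension.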
| Original language | English |
| --- | --- |
| Pages (from-to) | 633-646 |
| Number of pages | 14 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 98 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 30th International Conference on Algorithmic Learning Theory, ALT 2019 - Chicago, United States. Duration: 22 Mar 2019 → 24 Mar 2019 |
Bibliographical note
Publisher Copyright: © 2019 Proceedings of Machine Learning Research. All rights reserved.
Keywords
- Learning Theory
- Information Theory
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability