Abstract
This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that, when learning symmetric functions, one can choose initial conditions under which standard SGD training efficiently produces generalization guarantees. We verify this empirically and show that the guarantee does not hold when the initial conditions are instead chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
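As an illustration of the kind of experiment the abstract describes, the sketch below trains a one-hidden-layer ReLU network on a symmetric Boolean function (one that depends on the input only through its Hamming weight), once from a symmetry-aware initialization and once from a standard random one, and compares test error. This is only a hypothetical toy setup, not the paper's construction: the ramp-style first-layer initialization, the particular target function, the width, and the step sizes are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's construction): compare SGD training of a
# one-hidden-layer ReLU network from a symmetry-aware initialization vs. a random one
# on a symmetric Boolean target. All specific choices below are assumptions.
import torch

torch.manual_seed(0)
n = 20          # input dimension
width = n + 1   # one hidden unit per possible Hamming-weight threshold

def target(x):
    # A symmetric function: depends on x only through its Hamming weight.
    s = x.sum(dim=1)
    return ((s % 3) == 0).float()

def make_net(symmetric_init):
    net = torch.nn.Sequential(
        torch.nn.Linear(n, width),
        torch.nn.ReLU(),
        torch.nn.Linear(width, 1),
    )
    if symmetric_init:
        with torch.no_grad():
            # Hidden unit k starts as a "ramp" in the Hamming weight:
            # all incoming weights equal 1 and the bias is -k, so unit k
            # computes relu(sum(x) - k), a symmetric feature of the input.
            net[0].weight.fill_(1.0)
            net[0].bias.copy_(-torch.arange(width, dtype=torch.float))
            # Output layer starts at zero; linear combinations of the ramp
            # features can already express any function of the Hamming weight.
            net[2].weight.zero_()
            net[2].bias.zero_()
    return net

def run(symmetric_init, steps=2000, batch=64, lr=0.05):
    net = make_net(symmetric_init)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randint(0, 2, (batch, n)).float()
        loss = torch.nn.functional.mse_loss(net(x).squeeze(1), target(x))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Estimate error on fresh samples as a proxy for generalization.
    with torch.no_grad():
        x = torch.randint(0, 2, (4096, n)).float()
        err = ((net(x).squeeze(1) > 0.5).float() != target(x)).float().mean()
    return err.item()

print("symmetric init test error:", run(symmetric_init=True))
print("random init test error:   ", run(symmetric_init=False))
```

With the symmetry-aware start, the first layer already spans the symmetric features, so SGD mainly has to fit the output layer; from a generic random start there is no such guarantee, which is the contrast the abstract's empirical comparison points to.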
Original language | English |
---|---|
Title of host publication | LATIN 2020 |
Subtitle of host publication | Theoretical Informatics - 14th Latin American Symposium 2021, Proceedings |
Editors | Yoshiharu Kohayakawa, Flávio Keidi Miyazawa |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 401-412 |
Number of pages | 12 |
ISBN (Print) | 9783030617912 |
DOIs | |
State | Published - 2020 |
Externally published | Yes |
Event | 14th Latin American Symposium on Theoretical Informatics, LATIN 2020, São Paulo, Brazil. Duration: 5 Jan 2021 → 8 Jan 2021 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12118 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 14th Latin American Symposium on Theoretical Informatics, LATIN 2020 |
---|---|
Country/Territory | Brazil |
City | São Paulo |
Period | 5/01/21 → 8/01/21 |
Bibliographical note
Publisher Copyright: © 2020, Springer Nature Switzerland AG.
Keywords
- Neural networks
- Symmetry
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science