On Symmetry and Initialization for Neural Networks

Ido Nachum, Amir Yehudayoff

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We verify this empirically and show that it does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our results highlight the importance of using symmetry in the design of neural networks.
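As an illustration of the setting (a minimal sketch, not the paper's construction): a symmetric Boolean function depends on its input only through the number of ones, and one plausible way to encode that symmetry in the initialization of a one-hidden-layer network is to make every hidden unit's input weights identical across coordinates, so the whole network is a function of sum(x). All names and constants below are hypothetical choices for the example.

```python
import numpy as np

# Sketch only: a one-hidden-layer ReLU network whose initialization respects
# permutation symmetry. Each hidden unit's input weight vector is proportional
# to the all-ones vector, so every unit depends on x only through sum(x) --
# the quantity that determines any symmetric Boolean function.

rng = np.random.default_rng(0)
n, m = 20, 64                        # input dimension, number of hidden units (arbitrary)

c = rng.normal(size=m)               # one scalar per hidden unit
W = np.outer(c, np.ones(n))          # shape (m, n): row k equals c_k * (1, ..., 1)
b = rng.normal(size=m)               # biases
a = rng.normal(size=m) / np.sqrt(m)  # output-layer weights

def net(x):
    """Forward pass of the one-hidden-layer ReLU network."""
    return a @ np.maximum(W @ x + b, 0.0)

def majority(x):
    """A symmetric target function: depends only on the number of ones."""
    return 1.0 if x.sum() > n / 2 else 0.0

x = rng.integers(0, 2, size=n).astype(float)
print(net(x), majority(x))
```

With such an initialization the network's output is invariant to permuting the input coordinates from the very first step; random (non-symmetric) initialization has no such invariance, which is the contrast the abstract refers to.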

Original language: English
Title of host publication: LATIN 2020
Subtitle of host publication: Theoretical Informatics - 14th Latin American Symposium 2021, Proceedings
Editors: Yoshiharu Kohayakawa, Flávio Keidi Miyazawa
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 401-412
Number of pages: 12
ISBN (Print): 9783030617912
DOIs
State: Published - 2020
Externally published: Yes
Event: 14th Latin American Symposium on Theoretical Informatics, LATIN 2020 - São Paulo, Brazil
Duration: 5 Jan 2021 - 8 Jan 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12118 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th Latin American Symposium on Theoretical Informatics, LATIN 2020
Country/Territory: Brazil
City: São Paulo
Period: 5/01/21 - 8/01/21

Bibliographical note

Publisher Copyright:
© 2020, Springer Nature Switzerland AG.

Keywords

  • Neural networks
  • Symmetry

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
