No fine-tuning, no cry: Robust SVD for compressing deep networks

Murad Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman

Research output: Contribution to journal › Article › peer-review


A common technique for compressing a neural network is to compute the k-rank ℓ2 approximation Ak of the matrix A ∈ Rn×d that corresponds to a fully connected layer (or embedding layer) via SVD. Here, d is the number of input neurons in the layer, n is the number of neurons in the next layer, and Ak is stored in O((n + d)k) memory instead of O(nd). A fine-tuning step is then used to improve this initial compression. However, end users may not have the computational resources, time, or budget to run this fine-tuning stage, and the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks with an initial compression time similar to that of common techniques, but without the fine-tuning step. The main idea is to replace the k-rank ℓ2 approximation with an ℓp approximation, for p ∈ [1, 2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm for computing it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage.
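For context, the standard ℓ2 baseline that the paper improves on can be sketched in a few lines of NumPy. This is only an illustration of the truncated-SVD compression described above, not the authors' ℓp algorithm, and the layer dimensions below are arbitrary:

```python
import numpy as np

def compress_layer(A, k):
    """k-rank l2 (truncated SVD) compression of a weight matrix A (n x d).

    A is replaced by two factors U_k (n x k) and V_k (k x d), so the layer
    stores O((n + d) k) parameters instead of O(n d)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :k] * s[:k]   # n x k, singular values folded into the left factor
    V_k = Vt[:k, :]          # k x d
    return U_k, V_k

# Toy fully connected layer: d = 768 input neurons, n = 512 output neurons.
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 768))
U_k, V_k = compress_layer(A, k=64)

original = A.size                 # n * d = 393216 parameters
compressed = U_k.size + V_k.size  # (n + d) * k = 81920 parameters
A_k = U_k @ V_k                   # best rank-64 approximation of A in the l2 sense
```

In a network, the single dense layer A is replaced by two consecutive (bias-free) layers V_k and U_k, so inference cost also drops from O(nd) to O((n + d)k) per input.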

Original language: English
Pages (from-to): 5599
Issue number: 16
State: Published - 2 Aug 2021

Bibliographical note

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.


Keywords

  • Löwner ellipsoid
  • Matrix factorization
  • Neural networks compression
  • Robust low rank approximation

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering


