## Abstract

A common technique for compressing a neural network is to compute the rank-k ℓ_2 approximation A_k of the matrix A ∈ R^{n×d} that corresponds to a fully connected layer (or embedding layer) via SVD. Here, d is the number of input neurons in the layer, n is the number of neurons in the next layer, and A_k is stored in O((n + d)k) memory instead of O(nd). A fine-tuning step is then used to improve this initial compression. However, end users may not have the required computational resources, time, or budget to run this fine-tuning stage. Furthermore, the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks with an initial compression time similar to that of common techniques, but without the fine-tuning step. The main idea is to replace the rank-k ℓ_2 approximation with an ℓ_p approximation, for p ∈ [1, 2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm that computes it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage.
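
For concreteness, here is a minimal NumPy sketch of the standard SVD-based baseline described above; the paper's ℓ_p variant is more involved. The matrix sizes, the rank k = 64, and the helper name compress_layer_svd are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def compress_layer_svd(A: np.ndarray, k: int):
    """Rank-k l2 (Frobenius) approximation of a weight matrix A (n x d)
    via truncated SVD. The factors U_k (n x k) and V_k (k x d) satisfy
    A_k = U_k @ V_k and need O((n + d)k) memory instead of O(nd)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :k] * s[:k]  # absorb singular values into the left factor
    V_k = Vt[:k, :]
    return U_k, V_k

# Hypothetical fully connected layer with d = 768 inputs and n = 512 outputs.
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 768))
U_k, V_k = compress_layer_svd(A, k=64)

# The layer y = A @ x is replaced by two smaller layers y = U_k @ (V_k @ x),
# cutting parameters from n*d = 393,216 to (n + d)*k = 81,920.
x = rng.standard_normal(768)
print(np.linalg.norm(A @ x - U_k @ (V_k @ x)))  # approximation error on one input
```

Absorbing the singular values into the left factor keeps the compressed layer a plain composition of two linear maps, with no extra diagonal matrix to store.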

Original language | English |
---|---|
Pages (from-to) | 5599 |
Journal | Sensors |
Volume | 21 |
Issue number | 16 |
DOIs | |
State | Published - 2 Aug 2021 |

### Bibliographical note

Publisher Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

## Keywords

- Löwner ellipsoid
- Matrix factorization
- Neural network compression
- Robust low rank approximation

## ASJC Scopus subject areas

- Analytical Chemistry
- Information Systems
- Atomic and Molecular Physics, and Optics
- Biochemistry
- Instrumentation
- Electrical and Electronic Engineering