## Abstract

A common approach for compressing Natural Language Processing (NLP) networks is to encode the embedding layer as a matrix A ∈ R^{n×d}, compute its rank-j approximation A_{j} via SVD (Singular Value Decomposition), and then factor A_{j} into a pair of matrices that correspond to smaller fully-connected layers that replace the original embedding layer. Geometrically, the rows of A represent points in R^{d}, and the rows of A_{j} represent their projections onto the j-dimensional subspace that minimizes the sum of squared distances (“errors”) to the points. In practice, the rows of A may be spread around k > 1 subspaces, so factoring A based on a single subspace may lead to large errors that turn into large drops in accuracy. Inspired by projective clustering from computational geometry, we suggest replacing this single subspace by a set of k subspaces, each of dimension j, that minimizes the sum of squared distances from every point (row in A) to its closest subspace. Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of k small layers that operate in parallel and are then recombined with a single fully-connected layer. Extensive experimental results on the GLUE benchmark yield networks that are both more accurate and smaller compared to the standard matrix factorization (SVD). For example, we further compress DistilBERT by reducing the size of the embedding layer by 40% while incurring only a 0.5% average drop in accuracy over all nine GLUE tasks, compared to a 2.8% drop using the existing SVD approach. On RoBERTa we achieve 43% compression of the embedding layer with less than a 0.8% average drop in accuracy, compared to a 3% drop previously.
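The contrast between the two objectives can be illustrated numerically. The sketch below (not the paper's exact algorithm) compares the standard rank-j SVD baseline against a simple Lloyd-style heuristic for projective clustering: alternate between assigning each row to its nearest j-dimensional subspace and refitting each subspace by SVD on its assigned rows. All function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_j_factor(A, j):
    """Factor A into (n x j) and (j x d) matrices via truncated SVD,
    as in the standard embedding-layer compression baseline."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :j] * S[:j], Vt[:j]  # A_j = (U_j S_j) @ Vt_j

def k_subspaces(A, k, j, iters=20):
    """Illustrative Lloyd-style heuristic for projective clustering:
    alternate between assigning rows to their nearest j-dim subspace
    and refitting each subspace by SVD on its assigned rows."""
    labels = rng.integers(0, k, A.shape[0])
    for _ in range(iters):
        Vts = []
        for c in range(k):
            rows = A[labels == c]
            if len(rows) == 0:
                rows = A  # reseed an empty cluster on the full data
            _, _, Vt = np.linalg.svd(rows, full_matrices=False)
            Vts.append(Vt[:j])
        # Squared distance of each row to each candidate subspace.
        dists = np.stack(
            [np.sum((A - A @ V.T @ V) ** 2, axis=1) for V in Vts], axis=1)
        labels = dists.argmin(axis=1)
    return Vts, labels, float(dists.min(axis=1).sum())

# Synthetic data: rows drawn near two *different* j-dim subspaces of R^d,
# the regime where a single SVD subspace is a poor fit.
n, d, j, k = 200, 16, 2, 2
B1 = rng.standard_normal((j, d))
B2 = rng.standard_normal((j, d))
A = np.vstack([rng.standard_normal((n // 2, j)) @ B1,
               rng.standard_normal((n // 2, j)) @ B2])
A += 0.01 * rng.standard_normal(A.shape)

L, R = rank_j_factor(A, j)
err_svd = float(np.sum((A - L @ R) ** 2))  # one j-dim subspace
_, _, err_pc = k_subspaces(A, k, j)        # k j-dim subspaces
print(err_svd, err_pc)  # the k-subspace error should be much smaller
```

The pair `(L, R)` returned by `rank_j_factor` corresponds to the two smaller fully-connected layers of the SVD baseline; the k subspaces correspond to the k parallel small layers in the proposed architecture, whose outputs are recombined by a final fully-connected layer.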

| Original language | English |
|---|---|
| State | Published - 2021 |
| Event | 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online. Duration: 3 May 2021 → 7 May 2021 |

### Conference

| Conference | 9th International Conference on Learning Representations, ICLR 2021 |
|---|---|
| City | Virtual, Online |
| Period | 3/05/21 → 7/05/21 |

### Bibliographical note

Funding Information: Support for this research was provided in part by NSF award 1723943, for which we are grateful.

Publisher Copyright:

© 2021 ICLR 2021 - 9th International Conference on Learning Representations. All rights reserved.

## Keywords

- cs.LG
- stat.ML

## ASJC Scopus subject areas

- Language and Linguistics
- Computer Science Applications
- Education
- Linguistics and Language