Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing. (arXiv:2306.12929v1 [cs.LG])


By: Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort. Posted: June 23, 2023

Transformer models have been widely adopted across many domains in recent
years, and large language models in particular have advanced the field of AI
significantly. As these networks have grown in size, their capabilities have
increased tremendously, but at the cost of a significant increase in the
compute they require. Quantization is one of the most effective ways to reduce
the computational time and memory consumption of neural networks. Many studies
have shown, however, that modern transformer models tend to learn strong
outliers in their activations, making them difficult to quantize. To retain
acceptable performance, these outliers require keeping activations in a higher
bitwidth, or resorting to different numeric formats, extra fine-tuning, or
other workarounds. We show that strong outliers are related to
very specific behavior of attention heads that try to learn a “no-op” or just a
partial update of the residual. To achieve the exact zeros needed in the
attention matrix for a no-update, the input to the softmax is pushed to be
larger and larger during training, causing outliers in other parts of the
network. Based on these observations, we propose two simple (independent)
modifications to the attention mechanism – clipped softmax and gated attention.
We empirically show that models pre-trained using our methods learn
significantly smaller outliers while maintaining and sometimes even improving
the floating-point task performance. This enables us to quantize transformers
to full INT8 quantization of the activations without any additional effort. We
demonstrate the effectiveness of our methods on both language models (BERT,
OPT) and vision transformers.
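
The abstract only names the two modifications, so the snippet below is a
minimal, illustrative sketch of what they could look like in PyTorch, not the
authors' reference implementation: a clipped softmax that stretches the softmax
output slightly outside [0, 1] and clips it back, so exact zeros are reachable
at finite logit values, and a toy single-head attention whose output is
multiplied by a learned sigmoid gate so the head can scale its residual update
toward zero. The specific stretch-and-clip form, the (zeta, gamma) values, and
the gate placement are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def clipped_softmax(logits, zeta=1.0, gamma=-0.03, dim=-1):
    """Stretch softmax output to [gamma, zeta], then clip back to [0, 1].

    With gamma < 0, an attention probability can hit exactly 0 at a finite
    logit value, so a head that wants a "no-op" no longer needs to push its
    pre-softmax inputs to extreme magnitudes. The (zeta, gamma) defaults here
    are illustrative, not tuned values.
    """
    probs = F.softmax(logits, dim=dim)
    return torch.clamp((zeta - gamma) * probs + gamma, min=0.0, max=1.0)


class GatedAttention(nn.Module):
    """Toy single-head attention with an output gate (illustrative sketch).

    A sigmoid gate computed from the input lets the head scale its update
    toward zero (a conditional no-op) without extreme softmax inputs.
    """

    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, d_model)  # hypothetical gating module
        self.scale = d_model ** -0.5

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(-2, -1) * self.scale, dim=-1
        )
        out = attn @ self.v(x)
        # The gate in (0, 1) can suppress the head's contribution to the residual.
        return torch.sigmoid(self.gate(x)) * out
```

Under this sketch, a softmax probability s maps to (zeta - gamma) * s + gamma,
which is non-positive whenever s <= -gamma / (zeta - gamma), so small attention
weights are clipped to exact zeros instead of only approaching zero
asymptotically.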


DoctorMorDi

Moderator and Editor