Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

By: Max Lamparth, Anka Reuel Posted: May 8, 2024
arXiv:2302.12461v3 Announce Type: replace-cross
Abstract: Data set poisoning is a potential security threat to large language models and can produce backdoored models. How backdoored language models internally process trigger inputs, e.g., when switching to toxic language, has not yet been described. In this work, we study the internal representations of transformer-based backdoored language models and identify early-layer MLP modules, in combination with the initial embedding projection, as the most important components of the backdoor mechanism. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to the essentials of the backdoor mechanism. To this end, we introduce PCP ablation, in which we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy models, backdoored large models, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisoned data sets.
Trigger warning: Offensive language.
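The abstract describes PCP ablation only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea: record a chosen MLP module's input and output activations on reference prompts, take the top principal components of the outputs, and swap the module for a low-rank linear map built from them. The reduced-rank least-squares formulation, the module names, and the rank are illustrative assumptions, not the authors' exact implementation.

```python
import torch

@torch.no_grad()
def collect_io(model, module, batches):
    """Record (input, output) activations of one module over reference prompts."""
    xs, ys = [], []
    def hook(mod, inp, out):
        xs.append(inp[0].reshape(-1, inp[0].shape[-1]))
        ys.append(out.reshape(-1, out.shape[-1]))
    handle = module.register_forward_hook(hook)
    for input_ids in batches:
        model(input_ids)
    handle.remove()
    return torch.cat(xs), torch.cat(ys)

@torch.no_grad()
def fit_pcp_replacement(x, y, rank):
    """Fit a rank-limited linear map x -> y using the top principal
    components of the module's output activations y."""
    y_mean = y.mean(dim=0)
    # Principal components = right singular vectors of the centered outputs.
    _, _, vt = torch.linalg.svd(y - y_mean, full_matrices=False)
    pcs = vt[:rank]                                 # (rank, d_out)
    # Least-squares map from module inputs to PC coordinates of the outputs.
    coords = (y - y_mean) @ pcs.T                   # (n, rank)
    w = torch.linalg.lstsq(x.float(), coords.float()).solution  # (d_in, rank)
    return w, pcs, y_mean

class PCPModule(torch.nn.Module):
    """Drop-in low-rank linear replacement for the ablated MLP module."""
    def __init__(self, w, pcs, y_mean):
        super().__init__()
        self.register_buffer("w", w)
        self.register_buffer("pcs", pcs)
        self.register_buffer("y_mean", y_mean)
    def forward(self, hidden_states):
        return hidden_states @ self.w @ self.pcs + self.y_mean

# Hypothetical usage with a Hugging Face GPT-2-style model (names illustrative):
#   mlp = model.transformer.h[1].mlp
#   x, y = collect_io(model, mlp, tokenized_batches)
#   model.transformer.h[1].mlp = PCPModule(*fit_pcp_replacement(x, y, rank=8))
```

In this reading, the rank controls how much of the original module's behavior the replacement retains, which matches the abstract's framing of engineered replacements that reduce the MLP outputs to the essentials of the backdoor mechanism.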
Provided by: DoctorMorDi, Moderator and Editor