Explore, Establish, Exploit: Red Teaming Language Models from Scratch. (arXiv:2306.09442v2 [cs.CL] UPDATED)
By: Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell. Posted: June 23, 2023
Deploying large language models (LLMs) can pose hazards from harmful outputs
such as toxic or dishonest speech. Prior work has introduced tools that elicit
harmful outputs in order to identify and mitigate these risks. While this is a
valuable step toward securing language models, these approaches typically rely
on a pre-existing classifier for undesired outputs. This limits their
application to situations where the type of harmful behavior is known with
precision beforehand. It also sidesteps a central challenge of red teaming:
developing a contextual understanding of the behaviors that a model can
exhibit. Furthermore, when such a classifier already exists, red teaming has
limited marginal value because the classifier could simply be used to filter
training data or model outputs. In this work, we consider red teaming under the
assumption that the adversary is working from a high-level, abstract
specification of undesired behavior. The red team is expected to refine/extend
this specification and identify methods to elicit this behavior from the model.
Our red teaming framework consists of three steps: 1) Exploring the model’s
behavior in the desired context; 2) Establishing a measurement of undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model’s flaws using this measure and an established red teaming
methodology. We apply this approach to red team GPT-2 and GPT-3 models to
systematically discover classes of prompts that elicit toxic and dishonest
statements. In doing so, we also construct and release the CommonClaim dataset
of 20,000 statements that have been labeled by human subjects as
common-knowledge-true, common-knowledge-false, or neither. Code is available at
https://github.com/thestephencasper/explore_establish_exploit_llms. CommonClaim
is available at https://github.com/Algorithmic-Alignment-Lab/CommonClaim.
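As a rough illustration of the three-step framework (explore, establish, exploit), the Python sketch below strings the steps into a single pipeline. All names here (the explore/establish/exploit functions and the dummy model) are hypothetical placeholders chosen for this sketch, not the interface of the released repository, and the "measurement" in step 2 is reduced to a toy scorer standing in for a classifier trained on human evaluations.

```python
# Illustrative sketch of the explore-establish-exploit loop described in the abstract.
# Function and variable names are hypothetical; the real pipeline trains a classifier
# on human labels and uses an established red-teaming method for the search step.

from typing import Callable, List


def explore(model: Callable[[str], str], seed_prompts: List[str]) -> List[str]:
    """Step 1: survey the model's behavior by sampling outputs on diverse prompts."""
    return [model(p) for p in seed_prompts]


def establish(outputs: List[str], human_labels: List[int]) -> Callable[[str], float]:
    """Step 2: build a measure of undesired behavior from human evaluations.
    Here a toy lookup scorer stands in for a trained classifier."""
    flagged = {o for o, y in zip(outputs, human_labels) if y == 1}
    return lambda text: 1.0 if text in flagged else 0.0


def exploit(model: Callable[[str], str],
            harm_score: Callable[[str], float],
            candidate_prompts: List[str],
            k: int = 10) -> List[str]:
    """Step 3: search for prompts whose completions maximize the harm measure."""
    ranked = sorted(candidate_prompts,
                    key=lambda p: harm_score(model(p)),
                    reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    # Dummy model and labels, purely to show the flow of the three steps.
    dummy_model = lambda prompt: prompt[::-1]
    seeds = ["tell me a fact", "describe the weather"]
    outputs = explore(dummy_model, seeds)
    scorer = establish(outputs, human_labels=[1, 0])
    print(exploit(dummy_model, scorer, seeds, k=1))
```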