Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark


  • Jason Hoelscher-Obermaier1*, Julia Persson1*, Esben Kran1, Ioannis Konstas2, Fazl Barez1,3*


  • 1Apart Research 2Edinburgh Centre for Robotics 3Department of Engineering Sciences, University of Oxford
    * Equal contribution
    Accepted at Findings of ACL 2023

Abstract

Recent model editing techniques promise to mitigate the problem of LLMs memorizing false or outdated associations during training. However, we show that these techniques can introduce large unwanted side effects which are not detected by existing specificity benchmarks. We extend the existing CounterFact benchmark to include a dynamic component and dub our benchmark CounterFact+. Additionally, we augment the existing specificity metrics with a principled metric based on KL divergence. We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity. Our findings highlight the need for improved specificity benchmarks that identify and prevent unwanted side effects.
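To make the KL divergence-based metric concrete, the sketch below compares the next-token distributions of a model before and after an edit on prompts the edit should not affect. This is a minimal illustration, not the paper's exact implementation: the function names, the prompt set, and the choice of KL direction (pre-edit distribution as reference) are assumptions.

```python
# Hedged sketch of a KL divergence-based specificity metric.
# Assumes access to the model both before and after the edit; prompt set and
# KL direction are illustrative choices, not necessarily the paper's.
import torch
import torch.nn.functional as F


def next_token_log_probs(model, tokenizer, prompt):
    """Log-probabilities over the vocabulary for the token following `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits at the last position
    return F.log_softmax(logits, dim=-1)


def kl_specificity(pre_edit_model, post_edit_model, tokenizer, unrelated_prompts):
    """Mean KL(pre-edit || post-edit) over prompts the edit should leave unchanged.

    Larger values indicate stronger unintended drift, i.e. lower specificity.
    """
    kls = []
    for prompt in unrelated_prompts:
        log_p = next_token_log_probs(pre_edit_model, tokenizer, prompt)
        log_q = next_token_log_probs(post_edit_model, tokenizer, prompt)
        # F.kl_div(input=log_q, target=log_p, log_target=True) = KL(p || q)
        kls.append(F.kl_div(log_q, log_p, log_target=True, reduction="sum"))
    return torch.stack(kls).mean().item()
```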

Figure 1: Unintended side effects of model edits and how to measure them. (a) GPT-2-medium is edited using ROME to counterfactually associate the Louvre's location with Rome. However, this results in unintended associations ("loud facts") such as the association of Obama with Rome, suggesting low specificity of the edit. The edit also significantly increases the maximum logit (shown in brackets), suggesting that the edit is not merely replacing "Paris" with "Rome" in the desired contexts. (b) Measuring specificity by the fraction of correctly completed test prompts (CounterFact) suggests high specificity for ROME. Prepending the edit prompt (e.g., "The Louvre is in Rome.") to each test prompt (CounterFact+) results in a significant drop in performance. A significant drop in measured specificity is also observed when the model edit is implemented using constrained fine-tuning (FT-L).
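The CounterFact+ idea in panel (b) can be illustrated with a short sketch: prepend the edit prompt to a specificity test prompt and compare the edited model's greedy completion with and without the prefix. The model loading, the specific prompts, and the greedy single-token check below are illustrative assumptions; the actual benchmark uses the CounterFact neighborhood prompts and the edited model weights.

```python
# Hedged sketch of a CounterFact+-style test: the edit prompt is prepended to an
# unrelated test prompt, and a specific edit should leave the completion unchanged.
# The prompts and the unedited gpt2-medium checkpoint below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")  # stand-in for the edited model

edit_prompt = "The Louvre is in Rome."      # counterfactual statement inserted by the edit
test_prompt = "Barack Obama was born in"    # illustrative unrelated specificity prompt


def greedy_next_token(prompt):
    """Return the single most likely next token for `prompt` under the model."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    return tokenizer.decode(logits.argmax().item())


plain = greedy_next_token(test_prompt)                          # CounterFact-style test
prefixed = greedy_next_token(edit_prompt + " " + test_prompt)   # CounterFact+-style test
print(plain, prefixed)  # a highly specific edit should not change the completion
```

The only difference between the two calls is the prepended edit prompt; a large gap in accuracy between the plain and prefixed conditions, aggregated over many test prompts, is what CounterFact+ reports as a specificity failure.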