[Submitted on 10 May 2024]

An Assessment of Model-On-Model Deception

Julius Heitkoetter and 2 other authors

Abstract: The trustworthiness of highly capable language models is put at risk when they are able to produce deceptive outputs. Moreover, models that are vulnerable to deception are themselves less reliable. In this paper, we introduce a method for investigating complex, model-on-model deceptive scenarios. We create a dataset of over 10,000 misleading explanations by asking Llama-2 7B, 13B, 70B, and GPT-3.5 to justify the wrong answer to questions in MMLU. We find that, when models…