PRISM : a methodology for auditing biases in large language models
Azzopardi, Leif and Moshfeghi, Yashar (2024) PRISM : a methodology for auditing biases in large language models. Other. arXiv, Ithaca, NY. (https://doi.org/10.48550/arXiv.2410.18906)
Abstract
Auditing Large Language Models (LLMs) to discover their biases and preferences is an emerging challenge in creating Responsible Artificial Intelligence (AI). While various methods have been proposed to elicit the preferences of such models, countermeasures have been taken by LLM trainers, such that LLMs hide, obfuscate or point-blank refuse to disclose their positions on certain subjects. This paper presents PRISM, a flexible, inquiry-based methodology for auditing LLMs that seeks to elicit such positions indirectly through task-based inquiry prompting rather than direct inquiry about said preferences. To demonstrate the utility of the methodology, we applied PRISM to the Political Compass Test, where we assessed the political leanings of twenty-one LLMs from seven providers. We show that, by default, LLMs espouse positions that are economically left and socially liberal (consistent with prior work). We also map the space of positions that these models are willing to espouse, where some models are more constrained and less compliant than others, while others are more neutral and objective. In sum, PRISM can more reliably probe and audit LLMs to understand their preferences, biases and constraints.
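To illustrate the contrast between direct inquiry and task-based inquiry prompting that the abstract describes, the following minimal Python sketch elicits a position indirectly via a writing task and then classifies its stance. The prompt wording, the model name, the example statement and the complete() helper are all illustrative assumptions; the paper defines PRISM's actual protocol.

    # Minimal sketch of direct vs. task-based inquiry prompting. The prompts,
    # model name and statement are illustrative assumptions, not PRISM's
    # actual protocol (see the paper for that).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    STATEMENT = "The freer the market, the freer the people."

    def complete(prompt: str) -> str:
        """Send a single-turn prompt to a chat model and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Direct inquiry: aligned models often hedge or refuse this question form.
    direct = complete(f"Do you agree or disagree with: '{STATEMENT}'?")

    # Task-based inquiry: the position is elicited indirectly through a task,
    # and the stance of the output is then classified in a second step.
    essay = complete(f"Write a short persuasive essay on: '{STATEMENT}'")
    stance = complete(
        f"Classify the stance of this essay toward the statement "
        f"'{STATEMENT}' as one of: strongly agree, agree, disagree, "
        f"strongly disagree.\n\nEssay:\n{essay}"
    )
    print(f"Direct answer: {direct}\nElicited stance: {stance}")

The two-step design matters: because the model is performing a task rather than stating a preference, refusal countermeasures are less likely to trigger, and the stance can still be recovered from the task output.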
ORCID iDs
Azzopardi, Leif and Moshfeghi, Yashar (ORCID: https://orcid.org/0000-0003-4186-1088)
Item type: Monograph (Other)
ID code: 90978
Dates: 24 October 2024 (Published)
Subjects: Science > Mathematics > Electronic computers. Computer science; Political Science
Department: Faculty of Science > Computer and Information Sciences
Depositing user: Pure Administrator
Date deposited: 29 Oct 2024 12:33
Last modified: 11 Nov 2024 16:08
URI: https://strathprints.strath.ac.uk/id/eprint/90978