Search Results for author: Maryam Amirizaniani

Found 2 papers, 0 papers with code

LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop

no code implementations • 14 Feb 2024 • Maryam Amirizaniani, Jihan Yao, Adrian Lavergne, Elizabeth Snell Okada, Aman Chadha, Tanya Roosta, Chirag Shah

A case study using questions from the TruthfulQA dataset demonstrates that we can generate a reliable set of probes from one LLM that can be used to audit inconsistencies in a different LLM.

Hallucination
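
A minimal sketch of the cross-model probing idea summarized in the abstract above: one LLM generates paraphrased probes of a TruthfulQA-style question, and a different LLM's answers to those probes are collected for inconsistency auditing. The helper names (`generate_probes`, `audit`) and the stubbed model callables are illustrative stand-ins under assumed interfaces, not the paper's actual implementation.

```python
from typing import Callable, List


def generate_probes(question: str, probe_generator: Callable[[str], str],
                    n_probes: int = 3) -> List[str]:
    """Ask a probe-generating LLM for paraphrased variants of one question."""
    prompt = (f"Rewrite the following question in {n_probes} different ways, "
              f"one per line, without changing its meaning:\n{question}")
    reply = probe_generator(prompt)
    return [line.strip() for line in reply.splitlines() if line.strip()]


def audit(question: str, probe_generator: Callable[[str], str],
          target_llm: Callable[[str], str]) -> List[str]:
    """Collect the target LLM's answers to the original question and every probe."""
    probes = [question] + generate_probes(question, probe_generator)
    return [target_llm(p) for p in probes]


if __name__ == "__main__":
    # Stubbed models so the sketch runs end to end; swap in real API calls.
    fake_generator = lambda prompt: ("Is the Great Wall visible from space?\n"
                                     "Could an astronaut see the Great Wall of China?")
    fake_target = lambda prompt: "No, it is not visible to the naked eye from orbit."
    answers = audit("Can you see the Great Wall of China from space?",
                    fake_generator, fake_target)
    print(answers)
```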

AuditLLM: A Tool for Auditing Large Language Models Using Multiprobe Approach

no code implementations • 14 Feb 2024 • Maryam Amirizaniani, Tanya Roosta, Aman Chadha, Chirag Shah

Probing LLMs with varied iterations of a single question could reveal potential inconsistencies in their knowledge or functionality.
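
A minimal sketch of the multiprobe idea described above: pose several rephrasings of the same question to an LLM and score how consistent the answers are. It uses only the Python standard library; the `ask_llm` stub and the simple string-similarity scoring are assumptions for illustration, not the tool's actual method.

```python
from difflib import SequenceMatcher
from itertools import combinations
from typing import List


def consistency_score(answers: List[str]) -> float:
    """Mean pairwise string similarity across answers (1.0 = identical)."""
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)


def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a real model call.
    return "Paris is the capital of France."


probes = [
    "What is the capital of France?",
    "Which city serves as France's capital?",
    "Name the capital city of France.",
]
answers = [ask_llm(p) for p in probes]
print(f"consistency = {consistency_score(answers):.2f}")  # low values flag inconsistency
```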
