Search Results for author: Shaz Furniturewala

Found 4 papers, 0 papers with code

Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models

no code implementations16 May 2024 Shaz Furniturewala, Surgan Jandial, Abhinav Java, Pragyan Banerjee, Simra Shahid, Sumit Bhatia, Kokil Jaidka

Existing debiasing techniques are typically training-based or require access to the model's internals and output distributions, making them inaccessible to end-users looking to adapt LLM outputs to their particular needs.

Text Generation

All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation

no code implementations9 Nov 2023 Pragyan Banerjee, Abhinav Java, Surgan Jandial, Simra Shahid, Shaz Furniturewala, Balaji Krishnamurthy, Sumit Bhatia

Fairness in Language Models (LMs) remains a longstanding challenge, given the inherent biases in training data that can be perpetuated by models and affect downstream tasks.

Fairness, Language Modelling, +1
