1 code implementation • 1 Apr 2024 • Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Deepak John Reji
This paper introduces a Safety and Responsible Large Language Model (SR_LLM), an approach designed to enhance the safety of LLM-generated content.
no code implementations • 19 Jan 2024 • Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya
The rapid evolution of Large Language Models (LLMs) highlights the necessity for ethical considerations and data integrity in AI development, particularly emphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable) data principles.
no code implementations • 1 Dec 2023 • Shaina Raza, Mizanur Rahman, Shardul Ghuge
Despite increasing awareness of and research into fake news, there is still a significant need for datasets that specifically target racial slurs and biases within North American political speeches.
no code implementations • 30 Sep 2023 • Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Y Muaad
Bias detection in text is crucial for combating the spread of negative stereotypes, misinformation, and biased decision-making.