no code implementations • 6 May 2024 • Yanhong Bai, Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xingjiao Wu, Liang He
Detecting stereotypes and biases in Large Language Models (LLMs) is crucial for enhancing fairness and reducing adverse impacts on individuals or groups when these models are applied.
no code implementations • 12 Mar 2024 • Yanhong Bai, Jiabao Zhao, Tingjiang Wei, Qing Cai, Liang He
This paper thoroughly analyzes the interpretability of knowledge tracing (KT) algorithms.
no code implementations • 16 Dec 2023 • Jingyi Zhou, Jie Zhou, Jiabao Zhao, Siyin Wang, Haijun Shan, Gui Tao, Qi Zhang, Xuanjing Huang
Few-shot text classification has attracted great interest in both academia and industry due to the scarcity of labeled data in many fields.
no code implementations • 21 Aug 2023 • Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He
Detecting stereotypes and biases in Large Language Models (LLMs) can enhance fairness and reduce adverse impacts on individuals or groups when these LLMs are applied.