no code implementations • NAACL (BEA) 2022 • Alexander Kwako, Yixin Wan, Jieyu Zhao, Kai-Wei Chang, Li Cai, Mark Hansen
This study addresses the need to examine potential biases of transformer-based models in the context of automated English speech assessment.
no code implementations • 16 Apr 2024 • Yixin Wan, Kai-Wei Chang
Social biases can manifest in language agency.
no code implementations • 1 Apr 2024 • Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, Kai-Wei Chang
In this survey, we review prior studies on dimensions of bias: Gender, Skintone, and Geo-Culture.
no code implementations • 16 Feb 2024 • Yixin Wan, Kai-Wei Chang
Recent large-scale Text-To-Image (T2I) models such as DALLE-3 demonstrate great potential in new applications, but also face unprecedented fairness challenges.
no code implementations • 28 Oct 2023 • Yixin Wan, Fanyou Wu, Weijie Xu, Srinivasan H. Sengamedu
We explore the correlation between the level of hallucination in model responses and two types of sequence-level certainty: probabilistic certainty and semantic certainty.
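The probabilistic side of sequence-level certainty can be sketched with a length-normalized sequence probability computed from token log-probabilities. This is an illustrative proxy, not necessarily the paper's exact metric; the function name and inputs are assumptions.

```python
import math

def sequence_certainty(token_logprobs):
    """Length-normalized sequence probability as a simple
    probabilistic-certainty proxy (illustrative, not the
    paper's exact formulation)."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Higher average token log-probability -> higher certainty.
confident = sequence_certainty([-0.1, -0.2, -0.1])
hesitant = sequence_certainty([-2.0, -1.5, -2.5])
assert confident > hesitant
```

Semantic certainty, by contrast, would require comparing the meaning of sampled responses rather than their token probabilities.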
1 code implementation • 13 Oct 2023 • Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
Through a benchmarking evaluation of two popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.
1 code implementation • 8 Oct 2023 • Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang
Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations.
no code implementations • 26 May 2023 • Yixin Wan, Yuan Zhou, Xiulian Peng, Kai-Wei Chang, Yan Lu
To begin with, we are among the first to comprehensively investigate mainstream knowledge distillation (KD) techniques on deep noise suppression (DNS) models to resolve the two challenges.
1 code implementation • 26 May 2023 • Yixin Wan, Kuan-Hao Huang, Kai-Wei Chang
Existing fine-tuning methods for this task are costly, as all of the model's parameters must be updated during training.
1 code implementation • 21 May 2023 • Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, Tony Xia
We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts.
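The Program-of-Thoughts strategy mentioned here has the model emit an executable program whose execution produces the answer, instead of reasoning purely in natural language as with Chain-of-Thoughts. A minimal sketch of the execution step, with the "model output" hard-coded (no real LLM call; the question and variable names are illustrative):

```python
# Program-of-Thoughts: the model generates a program; running it
# yields the final answer. The generated code below is hard-coded
# for illustration.
model_output = """
import math
# Question: what is the area of a circle with radius 3?
answer = math.pi * 3 ** 2
"""

namespace = {}
exec(model_output, namespace)         # execute the generated program
print(round(namespace["answer"], 2))  # -> 28.27
```

Offloading the arithmetic to an interpreter avoids the calculation errors language models often make when reasoning in plain text.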
Ranked #1 on Natural Questions on TheoremQA
1 code implementation • Findings (ACL) 2022 • Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh
Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models.