no code implementations • 6 Jun 2024 • Guangliang Liu, Milad Afshari, Xitong Zhang, Zhiyu Xue, Avrajit Ghosh, Bidhan Bashyal, Rongrong Wang, Kristen Johnson
Our proposed framework pushes the fine-tuned model toward the bias lower bound during downstream fine-tuning. This indicates that the ineffectiveness of debiasing can be alleviated by overcoming the forgetting issue: we regularize successfully debiased attention heads according to the PLM's bias levels measured at the pretraining and debiasing stages.
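The core idea, regularizing selected (debiased) parameters so fine-tuning does not undo debiasing, can be illustrated with a minimal toy sketch. Everything here is hypothetical: a single scalar "head" weight, a toy quadratic task loss with optimum at 5.0, and a penalty pulling the weight back toward its debiased value; this is not the paper's actual method or code.

```python
def finetune(w_debiased, w0, lam, h, steps):
    """Toy fine-tuning of one scalar weight with an anti-forgetting penalty.

    Minimizes (w - 5)^2 + lam * (w - w_debiased)^2 by gradient descent,
    so the solution interpolates between the task optimum (5.0) and the
    debiased value, controlled by the regularization strength lam.
    """
    w = w0
    for _ in range(steps):
        task_grad = 2.0 * (w - 5.0)               # toy task loss gradient
        reg_grad = 2.0 * lam * (w - w_debiased)   # keep head near debiased state
        w -= h * (task_grad + reg_grad)
    return w

# lam = 0: plain fine-tuning, converges to the task optimum and "forgets"
w_plain = finetune(w_debiased=0.0, w0=0.0, lam=0.0, h=0.1, steps=500)

# large lam: solution stays close to the debiased value
w_reg = finetune(w_debiased=0.0, w0=0.0, lam=9.0, h=0.01, steps=2000)
```

With `lam = 9` the closed-form minimizer is `(5 + 9*0) / (1 + 9) = 0.5`, much closer to the debiased value than the unregularized result of 5.0.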
no code implementations • 30 May 2023 • Xitong Zhang, Avrajit Ghosh, Guangliang Liu, Rongrong Wang
It is widely recognized that the generalization ability of neural networks can be greatly enhanced through carefully designing the training procedure.
no code implementations • 2 Feb 2023 • Avrajit Ghosh, He Lyu, Xitong Zhang, Rongrong Wang
It is well known that the finite step-size ($h$) in Gradient Descent (GD) implicitly regularizes solutions to flatter minima.
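One standard way to see this effect is through the stability of GD on quadratics: a minimum with curvature $a$ is only stable when $h < 2/a$, so a finite step size filters out sharp minima. The sketch below is a toy illustration of that stability bound, not the paper's analysis; the curvature values are arbitrary.

```python
def gd(grad, x0, h, steps):
    """Plain gradient descent: x <- x - h * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - h * grad(x)
    return x

# f(x) = (a/2) x^2 has gradient a*x; GD is stable only if h < 2/a.
sharp_grad = lambda x: 100.0 * x   # sharp minimum, curvature a = 100
flat_grad = lambda x: 1.0 * x      # flat minimum, curvature a = 1

h = 0.05  # exceeds 2/100, so the sharp minimum repels the iterates
x_flat = gd(flat_grad, 1.0, h, 200)    # contracts toward the flat minimum
x_sharp = gd(sharp_grad, 1.0, h, 10)   # diverges from the sharp minimum
```

With `h = 0.05` the flat-minimum iterate shrinks by a factor 0.95 per step, while the sharp-minimum iterate is multiplied by $1 - 0.05 \cdot 100 = -4$ and blows up, so only flat minima survive at this step size.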
no code implementations • 18 Jul 2022 • Avrajit Ghosh, Michael T. McCann, Madeline Mitchell, Saiprasad Ravishankar
We present a method for supervised learning of sparsity-promoting regularizers for denoising signals and images.
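The flavor of supervised regularizer learning can be sketched with a toy example: denoise by soft-thresholding (the proximal operator of a weighted $\ell_1$ penalty) and pick the threshold that minimizes supervised MSE against clean training signals. The grid search, the signals, and the scalar threshold are all simplifying assumptions for illustration, not the paper's algorithm.

```python
def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1, applied elementwise."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

# toy supervised training pair: a sparse clean signal plus known noise
clean = [0.0, 3.0, 0.0, -2.0, 0.0, 0.0, 4.0, 0.0]
noise = [0.3, -0.2, 0.25, 0.1, -0.3, 0.2, -0.15, 0.35]
noisy = [c + n for c, n in zip(clean, noise)]

# "learn" the sparsity penalty's threshold by minimizing the supervised
# denoising loss over a candidate grid (a stand-in for bilevel optimization)
candidates = [i * 0.05 for i in range(40)]
best_t = min(candidates, key=lambda t: mse(soft_threshold(noisy, t), clean))
```

The learned nonzero threshold zeroes out noise on the sparse entries while only mildly biasing the large ones, so the supervised loss after thresholding beats the noisy baseline.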
no code implementations • 21 Nov 2021 • Avrajit Ghosh, Michael T. McCann, Saiprasad Ravishankar
We present a method for supervised learning of sparsity-promoting regularizers, a key ingredient in many modern signal reconstruction problems.