Search Results for author: Chih-Hsun Lin

Found 3 papers, 1 paper with code

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

no code implementations • 27 May 2024 • Chia-Yi Hsu, Yu-Lin Tsai, Chih-Hsun Lin, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

Therefore, parameter-efficient fine-tuning methods such as LoRA have emerged, allowing users to fine-tune LLMs without the need for considerable computing resources and with little performance degradation compared to fine-tuning all parameters.
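For context, a minimal sketch of the plain LoRA idea this abstract refers to (not the Safe LoRA projection proposed in the paper): the pretrained weight stays frozen while a low-rank update BA is trained. The class name, rank, and scaling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen base weight plus a trainable low-rank update."""
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        # Low-rank factors: only rank * (in + out) parameters are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero-init: starts as identity update
        self.scaling = alpha / rank

    def forward(self, x):
        # y = x W^T + scaling * (x A^T) B^T  -- base output plus low-rank correction
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total}")  # a small fraction of the full weight
```

This is why fine-tuning with LoRA needs far less memory: gradients and optimizer state are kept only for the two small factors, not the full weight matrix.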

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

1 code implementation • 16 Oct 2023 • Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang

While efforts have been made to mitigate such problems, either by implementing a safety filter at the evaluation stage or by fine-tuning models to eliminate undesirable concepts or styles, the effectiveness of these safety measures against a wide range of prompts remains largely unexplored.
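As a rough illustration of the evaluation-stage safety filter mentioned above (not the filter any particular model ships with), one common variant scores a generated image's CLIP embedding against a list of blocked concept phrases. The model checkpoint, concept list, and threshold below are assumptions for the sketch.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative post-hoc safety filter: flag an image whose CLIP embedding is
# too similar to any blocked concept phrase. All values here are placeholders.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
blocked_concepts = ["violence", "nudity"]  # hypothetical concept list

def is_unsafe(image: Image.Image, threshold: float = 0.25) -> bool:
    inputs = processor(text=blocked_concepts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize projected embeddings, then take cosine similarity per concept.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (img @ txt.T).squeeze(0)
    return bool((sims > threshold).any())
```

Filters of this kind are exactly what adversarial prompt attacks like Ring-A-Bell probe: a prompt that evades the concept check while still eliciting the removed concept exposes the gap the paper studies.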
