4 Feb 2024 • Zhenxing Niu, Haodong Ren, Xinbo Gao, Gang Hua, Rong Jin
This paper focuses on jailbreaking attacks against multi-modal large language models (MLLMs), which seek to elicit objectionable responses from MLLMs to harmful user queries.